Experience and Expertise in Complex Decision Making

Jacob Jacoby, New York University
Tracy Troutman, Lever Brothers
Alfred Kuss, Free University, West Berlin
David Mazursky, Hebrew University, Jerusalem
ABSTRACT - This paper has two objectives. First, we posit some conceptual distinctions between experience and expertise and indicate their implications for measuring the latter. Specifically, expertise needs to be operationalized in terms of knowledge- and/or performance-based indices that can be compared to some external criterion. Second, a top-line summary is provided of the findings from a recent investigation that utilized a performance-based operationalization to study differences in the predecision information accessing behavior of practicing security analysts.
To cite:
Jacob Jacoby, Tracy Troutman, Alfred Kuss, and David Mazursky (1986) ,"Experience and Expertise in Complex Decision Making", in NA - Advances in Consumer Research Volume 13, eds. Richard J. Lutz, Provo, UT : Association for Consumer Research, Pages: 469-472.

Advances in Consumer Research Volume 13, 1986      Pages 469-472

INTRODUCTION

The great number of factors that influence decision making may be organized into three broad categories: those relating to the nature of the decision task itself, the decision environment, and the decision maker. Theory (e.g., Bettman, 1979) suggests that one characteristic of the decision maker that should be particularly relevant is experience. Yet, at least with respect to the information search aspect of consumer decision making, the empirical literature has produced conflicting findings. Some studies suggest that people with higher levels of experience do less information search; others (e.g., Jacoby, Chestnut, Weigl and Fisher, 1976) suggest just the reverse.

Consumer research studies almost routinely include questions relating to past experience (e.g., purchase and/or consumption frequency and/or quantity). Few consider the related construct of expertise. This is not surprising since, as noted elsewhere (Jacoby, 1977a), consumer behavior offers few areas in which objective criteria exist for evaluating the quality of consumer decision making:

As Ross (1974) notes, the central problem in defining decision quality concerns use of appropriate criteria. One approach is to have "experts" identify what constitutes the "best" decision and then to assess the extent to which the subject is able to achieve this objective standard. Accordingly, we requested assistance from Procter & Gamble in identifying the single best laundry detergent available on the market for the purpose of using it as an external standard in our investigation. Their reply was illuminating. Though P&G believed that, overall, its products were superior to those of its competitors, there was no way to determine objectively which brand was best for all consumers because the specific needs, problems, and desires of individual consumers, and the types of stains, fabrics, etc., all varied considerably. What was best for one consumer or one type of laundry situation was not necessarily best for another consumer or another type of situation. It was obvious that, no matter which product category we approached, we had an identical problem (Jacoby, 1977, p. 571).

Without an objective basis for identifying decision quality, there is great difficulty in identifying just what constitutes expertise.

Selected Insights on Measuring Expertise Derived from the Perceived Quality and Abilities Literatures

Notwithstanding the very substantial difficulties, there have been several attempts in the consumer behavior literature to incorporate the notion of expertise. Perhaps the first mention appears in Scitovsky's (1944) classic paper on perceived quality wherein he hypothesizes that experts would be less likely than non-experts to rely on price when arriving at a purchase decision. Though others speculated about the issue (e.g., Jacoby, Olson and Haddock, 1971, p. 578; Shapiro, 1968), it wasn't until 1972-1973 that attempts to assess the relationship began to materialize.

One investigation, Valenzi and Eldridge (1973), operationalized expertise via two self-report questions probing the frequency and quantity of beer consumption. The subjects' answers to these two questions were combined and the distribution was split at the median to identify experts vs. non-experts. Aside from the methodological problems with self-report data (e.g., no independent effort was made to verify that the stated frequency and quantity estimates corresponded to any objective fact) and with median splits (particularly ones based on the integration of two disparate distributions), a major conceptual problem was that expertise was operationalized in terms of experience. Yet these two constructs are conceptually orthogonal. One can have considerable experience yet not be an expert, and people at the same level of expertise may have different levels of experience, and vice versa.
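The median-split operationalization being critiqued can be sketched as follows. The data here are invented for illustration; the point is that the procedure classifies "experts" purely from self-reported consumption, i.e., from experience:

```python
import statistics

def median_split_experts(frequency, quantity):
    """frequency, quantity: parallel lists of self-reported beer
    consumption scores, one pair per subject. Combines them and
    labels subjects above the median of the combined distribution
    as 'expert' -- the operationalization critiqued above."""
    combined = [f + q for f, q in zip(frequency, quantity)]
    med = statistics.median(combined)
    return ["expert" if c > med else "non-expert" for c in combined]

# Four hypothetical subjects: combined scores are [3, 7, 3, 10],
# whose median is 5.0, so subjects 2 and 4 are called "experts".
labels = median_split_experts([1, 4, 2, 5], [2, 3, 1, 5])
print(labels)  # ['non-expert', 'expert', 'non-expert', 'expert']
```

Note that nothing in this procedure touches knowledge or skill; a heavy but uninformed drinker lands in the "expert" cell.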

The tendency to define expertise in terms of experience may stem, in part, from dictionary definitions of these terms. According to Webster's New Collegiate Dictionary, experience includes "something personally encountered; knowledge, skill or practice derived from direct observation of or participation in events." The word experienced is defined as "made skillful or wise through observation of or participation in a particular activity." In contrast, the word expert is defined as "skillful in a particular field." Reflecting upon these definitions suggests that both experience and expertise involve acquiring knowledge and/or skill. The essential distinction appears to be that expertise reflects qualitatively higher levels of either knowledge or skill. One implication is that if one wishes to study expertise, then that construct needs to be operationalized as more than "something personally encountered," as would be the case if one simply assessed purchase and/or consumption frequency and/or quantity. While drinking two six-packs a day may make one experienced, heavy and drunk, it would not necessarily make one an expert on beer. "Personally encountering" does not equate to "expertise." Any measure that purports to assess expertise needs to utilize indicants that reflect knowledge and/or skill.

A second study, conducted over the 1972-73 academic year, relied on this rationale to derive a knowledge-based test of expertise (Jacoby and Williams, unpublished). A full description of how this test was developed is provided in a replication (Williams-Jones, 1974), which followed immediately and incorporated additional levels of the independent variable.

Briefly, a pool of items designed to assess stereo and high fidelity knowledge was developed, revised, pared down, and administered to a set of 50 pre-test subjects whose scores spanned virtually the entire range. The test was then administered to 12 stereo repair technicians, whose scores ranged from 22 to 30. Based upon these results and a consideration of the two distributions, cutoffs were assigned as follows: people scoring 22 or above were defined as experts and those scoring 12 or below were defined as non-experts. As applied to the 487 subjects employed in the replication study, 25% of the subjects were classified as experts and 23% as inexpert. This same test of expertise was later used to investigate whether people classified as opinion leaders in the realm of stereo equipment actually knew more about stereo concepts - they did (r = .69) - and a complete copy of the test is reprinted in that source (Jacoby and Hoyer, 1981).

Of present interest, the Williams-Jones and Jacoby replication also included four experience measures: stereo ownership, frequency of usage, number of magazines read each month which were directly relevant to stereos and stereo equipment, and number of electronics-related courses that the individual had completed. Due to the large sample size (n = 487), the correlations between scores on these experience indicants and scores on the test of expertise (.24, .20, .25, and .26, respectively) were all significant at p = .001 or better. However, the low correlations mean that none of these experience indicants explained more than 7% of the variance in the expertise scores. (N.B. The .24 correlation with ownership compares favorably to the .25 correlation obtained several years later by Jacoby and Hoyer, 1981.)
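The "no more than 7% of the variance" claim follows directly from squaring each correlation; a quick check using the four coefficients reported above:

```python
# Squaring a correlation coefficient gives the proportion of variance
# in one variable explained by the other. The four experience
# indicants and their correlations with the expertise test scores:
correlations = {
    "stereo ownership": 0.24,
    "frequency of usage": 0.20,
    "magazines read": 0.25,
    "courses completed": 0.26,
}

for name, r in correlations.items():
    print(f"{name}: r = {r:.2f}, variance explained = {r**2:.1%}")

# Even the largest coefficient (.26) explains only .26^2,
# about 6.8% of the variance -- under the 7% ceiling noted above.
```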

Moreover, knowledge does not always equate to skill. Demonstrating that one is an expert based on one's performance on a knowledge test is not necessarily equivalent to being able to exhibit that skill in actual task performance. Coaches may be exceedingly knowledgeable while, at the same time, exceedingly incapable of executing that knowledge. Of course, one might argue that these coaches are expert coaches, not expert players, and their players may be expert players, but they are not expert coaches. It remains true, however, that many of the elements that constitute either coaching or playing expertise are incapable of being assessed via knowledge tests. Being able to do something doesn't necessarily mean one is able to identify or articulate what it is that one does. As Polanyi's (1966) notion of tacit knowledge suggests, many aspects of performance are encoded (if at all) in hazy, non-verbal form.

The results of a study by DeNisi and Shaw (1977) are instructive here. These authors had subjects first rate their abilities in ten areas and then administered a battery of standard, commercially available tests to measure these abilities. These authors report: "Correlations between self-rated and tested abilities, although generally significant, were too small to have any practical significance. The self-ratings were also unable to differentiate between those who would score high and low on the ability tests, even for extreme self-rated groups. No moderator effects were found. It was concluded that self reports of ability could not substitute for ability tests" (p. 641). The median correlation between self-reports of abilities and actual abilities across all ten areas hovered near .3, and no single self-report measure explained more than 16% of the variance in the corresponding test score.

STUDYING DECISION MAKER EXPERTISE

Though far from a thorough discussion of the relevant experience/expertise literature, the work noted above was paramount among those things that influenced the senior author's thinking in developing the study reported below. Briefly, both prior theory and research (as well as common sense) suggest that the pre-decision information accessing behavior of experts might differ appreciably from that of those who were inexpert. To effectively study whether this was indeed so required a satisfactory means of operationalizing expertise. This eliminates both experience-based indices (e.g., those which probe the quantity and/or frequency and/or variety of purchase and/or usage experiences) and self-reports of expertise. Though expertise might also be approached sociometrically (e.g., via peer ratings), we know of no such approach that has been used and validated (against an appropriate criterion) in the consumer behavior realm. Finally, though a knowledge-based measure might be appropriate for assessing the knowledge component of expertise, it would be inadequate for assessing the other principal facet of expertise, namely skill.

This eventually led to outlining a performance-based study in 1977. Implementation of this research had to wait until the appropriate hardware (computers for containing and presenting the information environment) could be obtained. The study itself, which had a variety of other research objectives as well, was implemented over the 1981-82 academic year. Reports based on this database have either been published (Jacoby, Mazursky, Troutman and Kuss, 1984), are in press (Jacoby, Kuss, Mazursky and Troutman, 1985), or are under review (Jacoby, Jaccard, Kuss, Troutman and Mazursky, submitted; Jacoby, Kuss, Troutman and Mazursky, submitted). Beyond raising the conceptual issues noted above, a second objective of the present paper is to highlight some of the major findings from this investigation and to direct the interested reader to these other, more complete reports.

Overview of Methods/Procedures.

The nature of the subjects, decision task, task instructions, setting and procedure have already been described in considerable detail elsewhere (Jacoby et al., 1984, 1985). Suffice it to say that seventeen professional security analysts participated in a behavioral process simulation (see Jacoby, 1977b, for a definition of terms) of security analyst decision making.

The analysts were motivated to participate by virtue of two incentives. First, the analyst who performed the best on the task was awarded $500. Second and probably more important in an industry where recognition of one's performance counts highly, all the analysts knew that press releases would be issued to the relevant media publicizing the name of the winner.

Since it reflects upon their experience, note that the median age of these analysts was in the upper 30s, and their careers as professional security analysts ranged from 1.5 to 17 years (mean = 6.9 years; s.d. = 5.7 years). Fifteen of the analysts had Masters degrees; two held B.S. degrees. Two analysts declined to respond to an item regarding the income they derive from their activities as professional security analysts. Another eight indicated that their income was below $75,000 a year; seven indicated that their income was above this amount.

The task objective was to select the "best buy" (defined as that stock most likely to show the greatest percentage of growth in price per share over the next ninety day period) from among eight securities for each of four successive ninety day periods. Except for the names of these securities, all information provided to the analysts was authentic and taken from the 1969-1970 period when these securities were listed on the New York Stock Exchange. (Post-test manipulation checks revealed that only one of the 17 analysts correctly ascertained the identity of one of the eight stocks. Since there were 17 x 8 = 136 opportunities for someone to make such a correct identification, the one correct identification suggests that the camouflage manipulation worked as intended.) Subjects could access any of 26 types of fundamental factor information regarding each of the eight securities. Both pre- and post-investigation efforts suggest that, with few exceptions, these 26 factors are those of greatest interest and use to professional security analysts.

Operationalizing Expertise: The performance criterion used to distinguish between the better and poorer analysts was based directly on the increases in price per share of each security, calculated separately for each of the four test periods. The analysts were arranged in descending order, based upon the aggregate net yield they produced across the four periods.
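The performance criterion reduces to a simple ranking computation. A minimal sketch, with invented yield figures (the study used actual price-per-share increases across the four ninety-day periods):

```python
def rank_analysts(yields_by_period):
    """yields_by_period: dict mapping analyst -> list of net yields,
    one per test period. Returns analysts in descending order of
    aggregate net yield across all periods, i.e., best performer first."""
    totals = {analyst: sum(ys) for analyst, ys in yields_by_period.items()}
    return sorted(totals, key=totals.get, reverse=True)

# Three hypothetical analysts, four periods each:
example = {
    "A": [0.05, 0.02, -0.01, 0.04],   # aggregate 0.10
    "B": [0.10, -0.03, 0.06, 0.01],   # aggregate 0.14
    "C": [-0.02, 0.01, 0.00, 0.03],   # aggregate 0.02
}
print(rank_analysts(example))  # ['B', 'A', 'C']
```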

Summary of Findings

Expertise and Experience: Across the entire sample of 17 analysts, task performance (i.e., expertise) correlated nonsignificantly with age (r = -.22) and length of time with present employer (r = -.37). Most noteworthy, performance correlated negligibly with either experience (defined as the number of years working as a professional security analyst; r = .03) or income (r = .04).

The depth, content and sequence of information accessing behavior were examined both for the entire sample and separately for the five (or seven) best versus the five (or seven) poorest performing analysts. These data and analyses were quite extensive and proved to be too much for any single article. Indeed, despite the five papers that have thus far been prepared, a number of the analyses have not yet been reported. Hence, what follows is necessarily a highly restricted "top line" summary of some of the findings.

Depth of Search: Expertise appeared to be related to the overall depth of search. When each analyst's performance score was correlated with the number of items acquired across all four periods, a significant relationship was revealed (r = .41; p = .05). However, much of this result is due to the extensive accessing of the top performing analyst. When his data are removed, the correlation for the remaining 16 analysts hovers near .2.

Second, there was no significant difference on the molar dimensions of information. That is, experts and inexperts did not differ on the number of stocks (median = 8) or number of fundamental factors (median = 9) considered across the four test periods. A period-by-period analysis revealed that, while the better analysts accessed a relatively constant 8.6 ± .4 properties across the four periods, the poorer analysts displayed a very high initial accessing rate (13.6 properties during the first period) followed by a dramatic and significant (p = .02) drop for the second, third and fourth periods (where the median was 7.2 properties).

Third, the more expert analysts were significantly more thorough in examining the stocks and factors that they had looked at. This is determined by examining the "percent of submatrix" accessed. While the 26 factors for each of the eight stocks made available to each subject represented the experimenter-determined information matrix, not every piece of information in this matrix was necessarily useful, worthwhile or meaningful to any given subject. Hence, it is insightful to adopt a respondent-oriented perspective and ask: If attention were limited to only those stocks and factors considered by the analyst at least once, then what percent of the information from that submatrix was accessed? The results indicate that better performing analysts were consistently more thorough in examining the contents of their submatrices (77% overall) than were poorer performing analysts in examining the contents of their submatrices (55.5% overall).
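The percent-of-submatrix measure can be computed directly from an analyst's access log. A minimal sketch with hypothetical stock and factor names:

```python
def percent_of_submatrix(accesses):
    """accesses: set of (stock, factor) pairs the analyst examined
    at least once. The respondent-defined submatrix is every
    combination of the stocks and factors the analyst touched;
    the measure is the fraction of those cells actually accessed."""
    stocks = {stock for stock, factor in accesses}
    factors = {factor for stock, factor in accesses}
    submatrix_size = len(stocks) * len(factors)
    return len(accesses) / submatrix_size

# Hypothetical log: two stocks and two factors touched (a 2 x 2
# submatrix of 4 cells), but only 3 cells actually accessed.
acc = {("S1", "P/E"), ("S1", "yield"), ("S2", "P/E")}
print(f"{percent_of_submatrix(acc):.0%}")  # 75%
```

The full 8-stock by 26-factor experimenter matrix never enters the calculation; only the cells the analyst implicitly declared relevant do, which is what makes the measure respondent-oriented.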

Much greater information on depth of search is provided in the Computers in Human Behavior article, with supplemental material appearing in the Journal of Applied Psychology piece.

Content of Search: The differences between experts and inexperts in the type of information accessed were substantial. First, both expert and inexpert analysts devoted approximately 45% of their total information accessing to only 4 of the 26 available factors, and only one of these four factors (12-month price/earnings ratio) was common across both groups (see Jacoby, Kuss, Mazursky and Troutman, 1985).

Second, when tests were applied on a factor-by-factor basis, significant differences (at the .01 level or better) were found for 16 of the 26 factors. As one example, though the better analysts devoted only 1% of all their information accessing behavior to acquiring "interim earnings for the previous year expressed in terms of price per common share" information, this factor accounted for 9% of the information accessing of the poorer analysts (see Jacoby, et al, 1985).

Two types of information are of particular interest. The first is feedback information. Since it was one of the 26 factors (namely, "Percent price change over the past three months"), feedback was made available as part of the external information environment. Most discussions suggest a positive relationship between feedback and performance. However, as explained in Jacoby, Mazursky, Troutman and Kuss (1984), there are more compelling reasons for postulating an inverse relationship when feedback provides only descriptive information regarding the prior outcome without providing any diagnostic information (i.e., information which has either predictive and/or explanatory value). Our data are consistent with this expectation. Across all 17 analysts, the correlation between expertise and the accessing of outcome-only feedback was -.48 (p = .02).

Subsequent to publication of the above reports, one of our colleagues (Prof. Martin Gruber, Editor of the Journal of Finance) suggested that the data be reanalyzed to see what insights might be provided on a long-standing controversy in the domains of finance and accounting. Specifically, the question was whether better and poorer analysts could be differentiated in terms of their use of accounting-based information. Accordingly, the 26 factors were apportioned into three categories: fully accounting-based, partially accounting-based, and non-accounting-based. A paper detailing these analyses is currently under review (Jacoby, Kuss, Troutman, and Mazursky, submitted); hence these data are not described here.

Sequence of Search: Probably the most dramatic differences of all were obtained with respect to the information accessing sequences employed by the better vs. poorer performing analysts. Those who were more expert were overwhelmingly Type 3 (within property; see Jacoby, Chestnut, Weigl and Fisher, 1976, for a discussion of the operationalization of this index) information accessors. In contrast, the inexpert analysts devoted nearly equal proportions of their search to within property and within option search. These data are detailed in a paper now under review (Jacoby, Jaccard, Kuss, Troutman and Mazursky, submitted).
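The within-property vs. within-option distinction can be illustrated by classifying consecutive transitions in an access sequence. This is a simplified sketch with hypothetical data, not the full operationalization of the Jacoby et al. (1976) index:

```python
def classify_transitions(sequence):
    """sequence: ordered list of (stock, factor) accesses.
    Counts within-property transitions (same factor, different
    stock), within-option transitions (same stock, different
    factor), and shifts (both stock and factor change)."""
    counts = {"within_property": 0, "within_option": 0, "shift": 0}
    for (s1, f1), (s2, f2) in zip(sequence, sequence[1:]):
        if f1 == f2 and s1 != s2:
            counts["within_property"] += 1   # comparing stocks on one factor
        elif s1 == s2 and f1 != f2:
            counts["within_option"] += 1     # examining one stock in depth
        else:
            counts["shift"] += 1
    return counts

# Hypothetical log: P/E compared across three stocks, then S3 probed
# on two further factors.
seq = [("S1", "P/E"), ("S2", "P/E"), ("S3", "P/E"),
       ("S3", "yield"), ("S3", "growth")]
print(classify_transitions(seq))
# {'within_property': 2, 'within_option': 2, 'shift': 0}
```

A predominantly within-property profile, as the better analysts showed, means repeatedly comparing the eight stocks on a single factor before moving to the next factor.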

DISCUSSION

This report had two principal objectives. The first was to raise some perspectives regarding the distinction between experience and expertise - perspectives that have both conceptual and operational implications. The second was to highlight several of the major findings emanating from a recent investigation which, using a performance based procedure to assess expertise, revealed strong differences between expert and inexpert behavior in regard to pre-decision information accessing.

Several concluding comments are in order. First, virtually every operationalization is flawed in some respect. A significant flaw inherent in using performance as an indicant of expertise is the danger that the subject may make an objectively good decision, but for the wrong reason, or vice versa. This is especially true in the realm of security analysis, since the stock market doesn't necessarily function in a rational fashion.

Second, even if this measure were entirely accurate, it still represents an assessment of only the skill/performance aspect of expertise, not the knowledge aspect. Though the data reveal no relationship between expertise and experience (the correlation between performance and number of years as an analyst was only r = .03), a study is still called for that would assess both knowledge and skill components in the same investigation.

(N.B. A number of other criticisms may be found in the other reports. Also note that the present paper was prepared at "the last minute" by the senior author without any opportunity for review by the junior authors. Hence, he assumes all responsibility for any flaws.)

REFERENCES

Bettman, J.R. (1979) An information processing theory of consumer choice. Addison-Wesley, Reading, Mass.

DeNisi, A.S. and Shaw, J.B. (1977). Investigation of the uses of self-reports of abilities. Journal of Applied Psychology, 62 (5), 641-644.

Jacoby, J. (1977a). Information load and decision quality: Some contested issues. Journal of Marketing Research, 14, 569-573.

Jacoby, J. (1977b). The emerging behavioral process technology in consumer decision making research. In W.D. Perreault Jr. (Ed.) Advances in Consumer Research, 4, 263-265.

Jacoby, J., Chestnut, R.W., Weigl, K.C., & Fisher, W.A. (1976). Pre-purchase information acquisition: Description of a process methodology, research paradigm, and pilot investigation. In B.B. Anderson (Ed.) Advances in Consumer Research, 3, 305-313.

Jacoby, J. & Hoyer, W.D. (1981). What if opinion leaders didn't know more? A question of nomological validity. In K. Monroe (Ed.) Advances in Consumer Research, 8, 299-303.

Jacoby, J., Jaccard, J., Kuss, A., Troutman, T., & Mazursky, D. (Submitted). New directions in behavioral process research.

Jacoby, J., Kuss, A., Mazursky, D., & Troutman, T. (1985). Effectiveness of security analyst information accessing strategies: A computer interactive assessment. Computers in Human Behavior, in press.

Jacoby, J., Kuss, A., Troutman, T., & Mazursky, D. (Submitted). A note on the relationship between usage/nonusage of accounting information and effective security analyst decision making.

Jacoby, J., Mazursky, D., Troutman, T., & Kuss, A. (1984). When feedback is ignored: The disutility of outcome feedback. Journal of Applied Psychology, 69, 531-545.

Jacoby, J.; Olson, J.C.; & Haddock, R.A. (1971). Price, brand name and product composition characteristics as determinants of perceived quality. Journal of Applied Psychology, 55, 570-579.

Jacoby, J. and Williams, J.A. (1973). Price cue utilization in quality judgments as a function of consumer expertise. Unpublished manuscript.

Polanyi, M. (1966). The Tacit Dimension. Garden City, NY: Doubleday.

Scitovsky, T. (1944-45). Some consequences of the habit of judging quality by price. The Review of Economic Studies, 12, 100-105.

Shapiro, B.P. (1968). The psychology of pricing. Harvard Business Review, 46, 14-16, 18, 20, 22, 24-25, 160.

Valenzi, E. and Eldridge, L. (1973). Effects of price information, composition differences, expertise and rating scale on product quality ratings. Proceedings of the 81st Annual Convention, American Psychological Association, 829-830.

Williams-Jones, J.A. (1974). Price cue utilization in quality judgments as a function of consumer expertise: A replication. Unpublished M.S. thesis, Purdue University.
