When Do the Measures of Knowledge Measure What We Think They Are Measuring?

Rajesh Kanwar, San Diego State University
Lorna Grund, San Diego State University
Jerry C. Olson, Pennsylvania State University
Rajesh Kanwar, Lorna Grund, and Jerry C. Olson (1990) ,"When Do the Measures of Knowledge Measure What We Think They Are Measuring?", in NA - Advances in Consumer Research Volume 17, eds. Marvin E. Goldberg, Gerald Gorn, and Richard W. Pollay, Provo, UT : Association for Consumer Research, Pages: 603-608.

Advances in Consumer Research Volume 17, 1990      Pages 603-608


[This research was supported by the Science and Education Administration of the U.S. Department of Agriculture under Grant No. 5901-0401-8-01510 from the Competitive Grants Office to the third author.]

In order to investigate the effects of knowledge on behavior, researchers have used a variety of methods to measure knowledge. But, research results on the convergent validity of these measures of knowledge are equivocal. Some researchers have found significant correlations between measurement methods such as self-reports, free elicitation, and paper-and-pencil tests, while others have not. This study attempts to explain the inconsistent results. The results suggest that the convergent validity of the different measures may depend on what, and how much, consumers already know about a knowledge domain.

INTRODUCTION

Because of the common assumption that what people know affects their behaviors, researchers have spent considerable effort on exploring the effects of knowledge on consumer behavior (see Alba and Hutchinson 1987). For instance, researchers have investigated the effect of knowledge on information search (e.g., Brucks 1985), assimilation of new knowledge (e.g., Graesser and Nakamura 1982; Johnson and Russo 1984), choice processes (e.g., Bettman and Park 1980), information processing strategies (e.g., Fiske, Kinder and Larter 1983), problem solving processes (e.g., Sweller 1988) and perceptual processes (e.g., Obermiller and Wheatley 1984).

In this research, knowledge has been measured through a variety of methods such as subjective self-perceptions, paper-and-pencil tests (conventionally called objective tests), product ownership, usage experience with a product, and free response techniques such as free elicitation (see Mitchell 1982; Cole, Gaeth, and Singh 1986). These methods of measuring knowledge generally fall into two broad categories, direct and indirect measures of knowledge. Paper-and-pencil tests, or free-association methods such as free elicitation, are direct methods of measuring knowledge. They attempt to measure knowledge stored in memory. On the other hand, measures such as self-reports, or usage experience with a product, are indirect methods. They do not directly measure knowledge stored in memory. Rather, they measure individual characteristics that are thought to be related to people's knowledge in the domain of interest.

In this research on knowledge and behavior, researchers have often used just one of these methods to classify people as being either high or low in knowledge about a domain. This assumes that one method of measuring knowledge is as good as another--that is, the methods are equivalent in that they all operationalize the same underlying construct, knowledge. But research results on the convergent validity of these measures of knowledge (across and within the direct and indirect methods) are equivocal. Some researchers have found significant correlations between measurement methods such as self-reports, free elicitation, and paper-and-pencil tests or questionnaires (e.g., Heneman 1974; Klimoski and London 1974; Cole, Gaeth, and Singh 1986; Dacin and Mitchell 1986; Marks and Olson 1981; Marzano and Costa 1988). But other researchers report little or no correlation between such measures (e.g., Seigel and Pfeiffer 1965; Kanwar, Olson, and Sims 1981; Marzano and Costa 1988).

Moreover, in some studies the different measures of knowledge have produced opposing predictions about behavior, thereby suggesting that the different methods are not measuring the same construct. For example, Brucks (1985) found that consumers' self-perceived knowledge was unrelated to the number of attributes about which they acquired information when making choices. However, when questionnaires were used to measure consumer knowledge she found that more knowledgeable consumers acquired information on more attributes than did the less knowledgeable consumers. Other researchers have reported similar results (Lichtenstein and Fischoff 1977; also see Park, Gardner, and Thukral 1988). Indeed, some researchers have gone so far as to suggest that some of these methods may be measuring concepts or constructs other than knowledge. For instance, Park and Lessig (1981) suggest that subjective measures such as self-reports may be measuring consumers' self-confidence rather than their knowledge in a domain. Other researchers have made similar suggestions (see Lichtenstein and Fischoff 1977).

Since researchers often use just one of the various methods to measure knowledge, it is important for us to understand why past research on the convergent validity of these measures has produced conflicting results. In order to choose among these alternative methods with any confidence, we need to know when these methods are, or are not, valid measures of knowledge. Otherwise, we may never be certain of the validity of our research.

In this study we attempt to develop such an understanding. We first review some of the methods that have been used to measure knowledge. We then develop hypotheses about the conditions under which these alternative measures of knowledge are, or are not, likely to show convergent validity, and test these hypotheses.

THE MEASURES: A REVIEW AND ANALYSIS

Of the direct and indirect methods, first consider two indirect methods of measuring knowledge--usage experience and self-reports (or self-assessment) [Usage experience may be measured by a question such as "How long have you been using microwave ovens?" And self-assessed measures may be obtained through questions such as "On this 7-point scale, could you please tell us how much you think you know about ____?"]. When researchers use the amount of product experience to operationalize consumers' knowledge, they assume that past usage and experience are directly related to consumers' learning and knowledge. Unfortunately, this assumption is often invalid (Brucks 1985; Mitchell 1982). Because of differential product involvement, consumers with similar amounts of usage experience may have learned different amounts about a product domain. In fact, some consumers who neither own nor use a product, but hope to do so in the future, may actively seek information about it and accumulate large amounts of knowledge about it. For example, a teenager who does not currently own a car may be highly knowledgeable about cars (Mitchell 1982). Thus, operationalizations such as ownership or past usage experience may erroneously classify knowledgeable individuals as being relatively less knowledgeable, or vice versa.

Moreover, there is considerable evidence that a person's self-assessed knowledge is often an inaccurate representation of actual knowledge (see Gentner and Collins 1981; Fox and Dinur 1988). As is conventional in knowledge research, by actual knowledge we mean knowledge as measured by performance on conventional tests of knowledge, such as objective multiple-choice tests (Abelson 1979; Saegert and Young 1982). Gentner and Collins (1981) suggest several reasons for the discrepancies between self-perceived and actual knowledge. One reason, which is particularly relevant to this study, is how much people already know in that domain (see Park, Gardner, and Thukral 1988). This literature suggests that relatively more knowledgeable people provide more accurate self-reports than relatively less knowledgeable people. Gentner and Collins (1981) argue that because people usually learn domain-related concepts and terminology first (Brucks and Mitchell 1981; Brucks 1986), relatively less knowledgeable people are likely to judge their expertise in a domain by how many domain-related concepts they know or think they understand, rather than by what remains to be learned. That is, they are likely to judge their expertise by the number of domain-related concepts they can recall from memory. This is particularly true for people who have not undergone formal training in a domain. For instance, research in educational counselling and assessment indicates that self-assessment of knowledge or learning is likely to be more accurate for students who have received feedback on their relative learning than for students who have not received such feedback (see Laing 1988; Park, Gardner, and Thukral 1988).
Consequently, errors of judgment regarding self-assessed knowledge are most likely for people who are not only relatively less knowledgeable in a domain, but have also learned the little they know through informal means, e.g., experience, rather than formal training (by formal training we mean situations where learning is accompanied by objective feedback on how much has been learned, or not learned).

In summary, two major testable propositions emerge from the above discussion. First, self-report measures of knowledge are likely to be highly correlated with objective measures of knowledge for relatively knowledgeable people who have undergone formal training (e.g., college courses) in a domain. But the correlation between these two measures of knowledge is likely to be insignificant for people who are relatively less knowledgeable. This is particularly true for people who have learned the little they know through less formal means (e.g., experience), without objective feedback on how much they actually know. Second, as the earlier discussion indicated, people who have acquired their knowledge about a domain through informal means are more likely than people who have received formal training to judge their knowledge in a domain by the number of concepts that they can recall from memory.

The number of salient concepts in memory (that is the number of concepts a person can recall) is precisely what free response measures of knowledge such as the free elicitation method [See Mitchell (1982), Kanwar, Olson, and Sims (1981), for a discussion of the theoretical basis of the free elicitation method.] attempt to operationalize. In the free elicitation method the experimenter says a word (a cue or probe) and asks the subject to verbalize all the thoughts that occur in response to that word. The concepts that subjects mention in response to the initial probe are recorded and then are used as probes themselves to further explore consumers' cognitive structures for a particular knowledge domain (down to some "level"). By using several domain related initial probes the researcher attempts to elicit as many concepts as possible. Obviously the elicitation procedure can continue indefinitely. Typically, the researcher uses empirical procedures to pre-select the combination of initial probes and levels that produce the largest number of unique domain related concepts, within the constraints of the subject time available (see Kanwar 1987). The total number of unique domain-related concepts elicited by each subject then serves as an index of the number of salient concepts in memory [Kanwar, Olson and Sims (1981) define this property of knowledge structures as dimensionality.].
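As a concrete illustration, the index produced by the free elicitation procedure could be computed along the following lines. This is a minimal sketch, not the authors' actual coding procedure; the function name, the normalization rule, and the example data are all hypothetical. It assumes elicited responses have been transcribed and that a reference list of domain-related concepts is available for coding.

```python
def elicitation_index(responses, domain_concepts):
    """Count the unique domain-related concepts in a subject's elicitations.

    responses: list of concept strings elicited across all probes and levels.
    domain_concepts: reference set of concepts judged to belong to the domain.
    """
    # Normalize case and whitespace so repeated mentions collapse to one concept
    normalized = {r.strip().lower() for r in responses}
    return len(normalized & {c.strip().lower() for c in domain_concepts})

# Hypothetical data: five elicitations, three of which are unique nutrition concepts
concepts = {"protein", "vitamin c", "calcium", "fiber", "iron"}
elicited = ["Protein", "calcium", "my mother's cooking", "protein", "fiber"]
print(elicitation_index(elicited, concepts))  # 3
```

Note that duplicate mentions ("protein" twice) and non-domain responses ("my mother's cooking") do not add to the index, matching the idea of counting unique domain-related concepts.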

Thus, because the free elicitation procedure measures the number of salient concepts in memory, the index it produces is likely to be highly correlated with self-report measures of knowledge for people who have acquired their knowledge about a domain through informal means. But for people who have received formal education in the knowledge domain, this correlation is likely to be insignificant.

HYPOTHESES

In summary, we can generate two hypotheses regarding the conditions under which the different measures are likely to exhibit, or not exhibit, convergent validity.

H1: Self-assessed and objective measures of knowledge are likely to be highly correlated for relatively knowledgeable people who have undergone formal training in a knowledge domain, but not for people who have learned what they know through less formal means (e.g., experience).

H2: The free elicitation and self-assessed measures of knowledge are likely to be highly correlated for people who have acquired knowledge about a domain through informal means, but not for people with formal training in the domain.

METHODS

The study was carried out over two separate one- to two-hour sessions. During the first session, the free elicitation method was used to measure consumers' knowledge about nutrition and food. The second session occurred about a month later. During this session subjects filled out a questionnaire designed to obtain objective and subjective measures of their knowledge about food and nutrition, as well as their demographic characteristics. Subjects also performed some other tasks during both sessions, but these are not relevant here.

Knowledge Domain

We selected nutrition and food as the knowledge domain of interest because it met two major criteria. First, we needed a knowledge domain in which formal training is usually available. The food and nutrition domain fulfilled this criterion because formal training programs on nutrition are available in most colleges and universities. Second, we needed a knowledge domain in which people can acquire knowledge through experience rather than formal training. Again, the food and nutrition domain met this criterion. People not only make food decisions daily, but are also constantly exposed to nutritional information and concepts through the general media and processed food labels. Thus, they have ample opportunity to learn about food and nutrition without undergoing formal training. In addition, because food is important to health and well-being, people are likely to be motivated to acquire such knowledge.

Subject Characteristics

In order to generate evidence on the issues raised above we needed two heterogeneous groups of subjects: subjects who differed in their knowledge of, formal training in, and experience with food and nutrition. One group had to have learned about food and nutrition through formal programs, such as college courses, that provide objective feedback on the learning that has occurred. A second group had to be not only relatively less knowledgeable than the first, but also to have learned about food and nutrition through experience rather than formal training. Consequently, we used two very different groups of subjects in this study--housewives and undergraduate students in nutrition.

Altogether 62 women participated in this study. Thirty-one were female undergraduate students majoring in nutrition at a large eastern university. The remaining 31 subjects were adult women, mostly housewives, who resided in the same town as the university. However, data for four subjects had to be dropped because they either did not complete the procedures or were adult women who had taken college-level courses in nutrition. Since these latter subjects had learned about food and nutrition through both experience and formal training, they did not meet our need for subjects who had learned mostly through experience or formal training, but not both. Hence, they were dropped. Subjects were paid $5 per hour for their participation.

The average adult woman in the sample was 36 years old, married, had 2 children, and had a family income between $15,000 and $20,000. She had 15 years of formal education, cooked about 83% of the family meals, and bought 82% of the food for the household. None of these women had taken college-level courses in nutrition.

The average student was a 22 year old single woman, lived in a household of 2.4 people, and had about 15.6 years of formal education. She prepared about 69% of the meals in her household and bought about 75% of the food. Since all these students were nutrition majors, they had had formal college-level courses in nutrition.

As this description indicates, the housewives had greater responsibilities and broader experience in choosing foods, whereas the students had more formal education in nutrition. Thus, the undergraduate nutrition students were likely to have acquired their knowledge about food and nutrition through college-level courses. The housewives, on the other hand, were likely to have acquired their knowledge through their greater experience in making food choices.

Procedures

As mentioned earlier, the free elicitation procedure was administered during the first of the two sessions. Briefly, for this procedure, subjects were told to verbalize all the thoughts they had in response to a word the researcher would say (e.g., nutrition). Subjects were also told that the same procedure would be repeated with several words.

The procedure was first carried out with a practice probe to make sure that the subject understood the procedure. The researcher said the word "cars" and recorded the subject's responses (first-level responses) to this probe on a pre-developed form. Whenever the subject stopped talking, the researcher asked her if she had any more thoughts. If she did, they were recorded as described. If she did not, the researcher took the first response to the "cars" probe and used it as a probe, again recording her responses. After the subject had finished articulating her (second-level) thoughts to this probe, the researcher used the next first-level response as a probe. This continued until all first-level responses had been used as probes. Once the subject understood the procedure, the researcher conducted the free elicitation procedure, as described above, for each of six preselected probes. The six probes were presented in a random order for each subject. The entire session was unobtrusively tape recorded with the subject's permission.

The six probes used in this study were selected because pre-tests and previous work had demonstrated that these concepts were effective in tapping different portions of consumers' knowledge structures for food and nutrition.

Two judges later used transcripts of the tape-recorded elicitations to code them for the number of unique nutritional concepts elicited. To aid in the coding, the judges used a pre-developed list of food and nutrition concepts. The interjudge reliability was above .8. Disagreements between the two judges were resolved by a third judge.
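The paper does not specify which reliability statistic was used. One common choice for two judges coding items into categories is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is illustrative only; the function and the example codes are hypothetical.

```python
def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two judges' binary (0/1) codes."""
    assert len(codes_a) == len(codes_b) and len(codes_a) > 0
    n = len(codes_a)
    # Observed proportion of items on which the judges agree
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement from each judge's marginal rate of coding "1"
    pa = sum(codes_a) / n
    pb = sum(codes_b) / n
    expected = pa * pb + (1 - pa) * (1 - pb)
    return (observed - expected) / (1 - expected)

# Hypothetical codes: 1 = "counts as a unique nutrition concept", 0 = does not
judge1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
judge2 = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(judge1, judge2), 2))
```

Raw percent agreement (here 9 of 10 items) overstates reliability when one code dominates, which is why a chance-corrected index is often preferred.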

The questionnaire used to measure subjects' knowledge in this study was developed on the basis of the questionnaire used by Kanwar, Olson, and Sims (1981). Since Cronbach's alpha for the 23-item questionnaire developed by these authors was only .68, the questionnaire was expanded to 85 factual multiple-choice questions on food and nutrition. These items were selected from a larger set of questions used in nutritional research. The questionnaire was tested with 150 undergraduate business students at a major southeastern university. Cronbach's alpha for this expanded questionnaire was .84. As is common, the total number of correct responses to this questionnaire was used as an indicator of subjects' knowledge about nutrition. The questionnaire also contained questions on subjects' demographic characteristics and nutritional education levels, in addition to a single-item 10-point scale designed to measure subjects' self-perceptions of their nutritional knowledge.
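For readers unfamiliar with the statistic, Cronbach's alpha can be computed directly from a subjects-by-items matrix of item scores. A minimal sketch follows; the function name and the tiny data matrix are hypothetical, and a real questionnaire would of course have far more subjects and items.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a subjects x items matrix of scores.

    items: list of rows, one per subject; each row holds item scores
    (e.g., 1 = correct, 0 = incorrect on a multiple-choice question).
    """
    k = len(items[0])  # number of items

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [variance([row[j] for row in items]) for j in range(k)]
    total_var = variance([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical right/wrong scores for 4 subjects on 3 items
scores = [[1, 1, 1], [1, 0, 1], [0, 0, 1], [0, 0, 0]]
print(round(cronbach_alpha(scores), 2))  # 0.75
```

Alpha rises toward 1 as the items covary (measure the same thing), which is why expanding the 23-item test to 85 more homogeneous items could raise alpha from .68 to .84.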

RESULTS

Table 1 presents the results of this study. As this table shows, the self-report measure of knowledge was significantly correlated with the scores on the knowledge test, for subjects who had had formal training in nutrition. However, for subjects who had not had such training, the correlation was not significant. Thus, the results support the first hypothesis. Consumers who have had formal training are able to assess their knowledge more accurately than subjects who have acquired their knowledge about a domain through informal means.

As Table 1 shows, the results also supported the second hypothesis. For consumers who did not have formal training in nutrition, the free elicitation measure of knowledge was significantly correlated with subjects' self-assessments of their knowledge about nutrition. However, the correlation was not statistically significant for subjects who had had formal training. Thus, it seems that when people are in doubt about their level of expertise in a knowledge domain, they use the number of concepts that they can recall from memory to make judgments about their expertise in the domain. But people who have undergone formal training in a domain, receiving formal feedback during this process (as in college courses), are likely to rely on the more accurate formal feedback (e.g., performance on examinations) to judge their knowledge.
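The comparison implicit in both hypotheses, whether a correlation is reliably larger in one independent group than in another, is commonly tested with Fisher's r-to-z transformation. The sketch below is illustrative; the correlation values and the equal group sizes are hypothetical, not the values in Table 1.

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations."""
    z1 = math.atanh(r1)  # Fisher's r-to-z transformation
    z2 = math.atanh(r2)
    # Standard error of the difference between the two transformed values
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Hypothetical: r = .60 in one group of 29, r = .10 in another group of 29
z = fisher_z_diff(0.60, 29, 0.10, 29)
print(round(z, 2))  # |z| > 1.96 would suggest the correlations differ at p < .05
```

This kind of test addresses a stronger claim than the pattern reported above (one correlation significant, the other not), since two correlations can straddle a significance threshold without differing significantly from each other.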

DISCUSSION

As we mentioned earlier, researchers have commonly used a variety of direct and indirect methods to measure knowledge. Recent research on the validity of these measures has been equivocal, with some studies reporting high convergent validity, and others reporting low or no convergent validity, between the different methods. This study provides an explanation for these inconsistent results. For example, the results of this study explain why Kanwar, Olson, and Sims (1981) found insignificant correlations between objective measures of knowledge and the free elicitation and repertory grid measures of knowledge. These authors used secretaries and housewives as subjects. This study's results suggest that the insignificant correlations occurred because such subjects are unlikely to have had formal training in nutrition. If these authors had used people who had had formal training in nutrition, the results may have been quite different.

In sum, this study's results suggest that alternative measures of knowledge may be equally valid under some circumstances, but not others. For instance, indirect and direct measures such as self-reports and objective tests are equally valid for measuring the knowledge levels of people who have had formal training in the domain of interest, but not for people who have not had such training. This difference arises because people who have had formal training in the domain seem to use different criteria to evaluate their knowledge about a domain than people who lack formal training. The latter, in the absence of formal evaluation of their knowledge about a domain, appear to use criteria such as how much information they can recall from memory to evaluate their knowledge about a domain. However, those with formal training are able to evaluate their knowledge about a subject more objectively because of the feedback they have received during their training or education.

TABLE 1

CORRELATIONS BETWEEN THE FREE ELICITATION, PAPER-AND-PENCIL, AND SELF-REPORT MEASURES OF KNOWLEDGE FOR PEOPLE WITH, AND WITHOUT, FORMAL TRAINING IN NUTRITION

However, the results of a single study cannot be used to make major generalizations without further research. First, similar research must be conducted with knowledge domains other than food and nutrition. Second, this study was based on just three methods of measuring knowledge--self-reports, free elicitation, and objective tests. However, several other methods, such as product ownership, usage experience, the repertory grid, and response times (see Mitchell 1982), have also been used to measure knowledge. Research is needed to determine whether the results of this study also hold for these methods. Third, some caution must be used in interpreting the results of this study because undergraduate students, unlike housewives, are constantly exposed to multiple-choice tests. Thus, students may outperform housewives on such tests not because they have greater knowledge in the domain of interest, but because they are more proficient at taking multiple-choice tests. Consequently, this study's results may reflect such differences in the test-taking abilities of housewives and students. Finally, the relatively small sample size, although acceptable in an exploratory study, may have limited the power of the study. Future studies can address the latter two problems by using larger samples of subjects who have similar backgrounds except for the source of their learning (experience or formal training).

In conclusion, this study's results suggest that researchers investigating the effects of knowledge on behavior must be careful in their selection of the methods they use to measure knowledge in a domain. Not all measures are equally valid for all subjects. The methods used to measure knowledge must be suited to the characteristics of the subjects used in the study. Otherwise, the research is likely to result in misleading conclusions.

REFERENCES

Abelson, Robert P. (1979), "Differences Between Belief and Knowledge Systems," Cognitive Science, 3, 355-366.

Alba, Joseph W. and J. Wesley Hutchinson (1987), "Dimensions of Consumer Expertise," Journal of Consumer Research, 13, 411-454.

Bettman, James R. and C. Whan Park (1980), "Effects of Prior Knowledge and Experience and Phase of the Choice Process on Consumer Decision Processes: A Protocol Analysis," Journal of Consumer Research, 7, 234-248.

Brucks, Merrie (1986), "A Typology of Consumer Knowledge Content," in Advances in Consumer Research, Richard J. Lutz, ed., Provo, UT: Association for Consumer Research, 13, 58-63.

Brucks, Merrie (1985), "The Effects of Product Class Knowledge on Information Search Behavior," Journal of Consumer Research, 12, 1-16.

Brucks, Merrie and Andrew A. Mitchell (1981), "Knowledge Structures, Production Systems, and Decision Strategies," in Advances in Consumer Research, Kent B. Monroe, ed., Ann Arbor, MI: Association for Consumer Research, 8, 750-757.

Cole, Catherine A., Gary Gaeth, and Surendra N. Singh (1986), "Measuring Prior Knowledge," in Advances in Consumer Research, Richard J. Lutz, ed., Provo, UT: Association for Consumer Research, 13, 64-66.

Dacin, Peter A. and Andrew A. Mitchell (1986), "The Measurement of Declarative Knowledge," in Advances in Consumer Research, Richard J. Lutz, ed., Provo, UT: Association for Consumer Research, 13, 454-459.

Fiske, Susan T., Donald R. Kinder and Michael W. Larter (1983), "The Novice and the Expert: Knowledge Based Strategies in Political Cognition," Journal of Experimental Social Psychology, 19, 381-400.

Fox, Shaul and Yossi Dinur (1988), "Validity of Self-Assessment: A Field Evaluation," Personnel Psychology, 41, 581-592.

Gentner, Dedre and Allan Collins (1981), "Studies of Inference from Lack of Knowledge," Memory and Cognition, 9 (4), 434-443.

Graesser, C. Arthur and Glenn V. Nakamura (1982), "The Impact of a Schema on Comprehension and Memory," in Gordon H. Bower, ed., The Psychology of Learning and Motivation, New York: Academic Press, 16, 59-109.

Heneman, Herbert G., III (1974), "Comparisons of Self- and Superior Ratings of Managerial Performance," Journal of Applied Psychology, 59, 638-642.

Johnson, Eric J. and Edward J. Russo (1984), "Product Familiarity and Learning New Information," Journal of Consumer Research, 11, 542-550.

Kanwar, Rajesh (1987), "The Role of Situational Factors, Consumer Knowledge, and Decision Goals in Consumer Decision Processes," Unpublished Dissertation, The Pennsylvania State University.

Kanwar, Rajesh, Jerry C. Olson, and Laura S. Sims (1981), "Toward Conceptualizing and Measuring Cognitive Structures," in Advances in Consumer Research, Kent B. Monroe, ed., Ann Arbor, MI: Association for Consumer Research, 8, 122-127.

Klimoski, Richard J. and Manuel London (1974), "Role of the Rater in Performance Appraisal," Journal of Applied Psychology, 59, 445-451.

Laing, Joan (1988), "Self-Report: Can It Be of Value as an Assessment Technique?," Journal of Counselling and Development, 67, 60-61.

Lichtenstein, Sarah and Baruch Fischoff (1977), "Do Those Who Know More Also Know More About How Much They Know?," Organizational Behavior and Human Performance, 20, 159-183.

Marks, Larry J. and Jerry C. Olson (1981), "Toward a Cognitive Structure Conceptualization of Product Familiarity," in Advances in Consumer Research, Kent B. Monroe, ed., Ann Arbor, MI: Association for Consumer Research, 8, 145-150.

Marzano, Robert J. and Arthur L. Costa (1988), "Question: Do Standardized Tests Measure General Cognitive Skills? Answer No," Educational Leadership, May, 66-71.

Mitchell, Andrew A. (1982), "Models of Memory: Implications for Measuring Knowledge Structure," in Advances in Consumer Research, Andrew A. Mitchell, ed., Ann Arbor, MI: Association for Consumer Research, 9, 45-51.

Obermiller, Carl and John J. Wheatley (1984), "Price Effects on Choice and Perceptions Under Varying Conditions of Experience, Information, and Beliefs in Quality Differences," in Advances in Consumer Research, Thomas C. Kinnear, ed., Provo Utah: Association for Consumer Research, 11, 453-458.

Park, C. Whan, Meryl P. Gardner and Vinod K. Thukral (1988), "Self-perceived Knowledge: Some Effects on Information Processing for a Choice Task," American Journal of Psychology, 101 (3), 401-424.

Park, C. Whan and V. Parker Lessig (1981), "Familiarity and Its Impact on Consumer Decision Biases and Heuristics," Journal of Consumer Research, 8, 223-230.

Saegert, Joel and Eleanor A. Young (1982), "Distinguishing Between Two Different Kinds of Consumer Nutrition Knowledge," in Advances in Consumer Research, Andrew A. Mitchell, ed., Ann Arbor, MI: Association for Consumer Research, 9, 342-347.

Seigel, Arthur I. and Mark G. Pfeiffer (1965), "Factorial Congruence in Criterion Development," Personnel Psychology, 18, 267-279.

Sweller, John (1988), "Cognitive Load During Problem Solving: Effects on Learning," Cognitive Science, 12, 257-285.
