Issues in Measuring Abstract Constructs

Judith Lynne Zaichkowsky, Simon Fraser University
[To cite]:
Judith Lynne Zaichkowsky (1990), "Issues in Measuring Abstract Constructs," in NA - Advances in Consumer Research Volume 17, eds. Marvin E. Goldberg, Gerald Gorn, and Richard W. Pollay, Provo, UT: Association for Consumer Research, 616-618.


INTRODUCTION

The three papers presented in this competitive session attempt to clarify three different constructs: involvement, knowledge, and values. The paper by Jain and Srinivasan tries to reduce a large number of possible indicators of involvement to a small number and then interpret the resulting groupings of items as types of involvement. The paper by Kanwar, Grund and Olson attempts to demonstrate that self-report measures of knowledge may be representative of objective knowledge structures when elicited from people formally trained in that area. The paper by Shrum, McCarty and Loeffler looks at the interaction between private self-consciousness and value stability. The link among the papers lies only in their attempt to clarify these abstract constructs; therefore, each paper will be discussed separately.

"AN EMPIRICAL ASSESSMENT OF MULTIPLE OPERATIONALIZATIONS OF INVOLVEMENT"

In the last five years I have read more than a dozen papers on improving measures of involvement. This fact leads me to believe that when researchers disagree with how others have proposed to measure involvement, they feel it is easy to construct their own "better" or more suitable measure. This is truly a noble cause; however, I believe it is difficult to improve on a measure in a purely empirical fashion, as attempted in this paper.

First of all, the present paper draws on items reported in several studies of involvement. However, the items in each of the previous studies were developed for different purposes based on different conceptualizations of involvement. Laurent and Kapferer (1985) wanted to measure involvement with products and first conceptualized involvement as having five facets, namely importance, pleasure, sign-value, risk probability, and risk importance. Therefore, they developed items to represent five dimensions. On factoring the items, they found importance and risk importance of the product to be indistinguishable in the minds of their respondents. Zaichkowsky (1985a) developed a context-free measure of involvement applicable to products, advertisements, and purchase decisions. Items were selected for the scale by "expert judges" to represent a single definition of involvement, namely personal relevance as reflected by needs, values, and interests. Ratchford's (1987) items related to the decision to purchase a product, not involvement with the product per se. The scales by Higie and Feick (1989) and McQuarrie and Munson (1987) incorporate several items from Zaichkowsky (1985a) along with items that these researchers subjectively felt captured a more hedonic, enduring aspect of involvement with products.

I am unsure of the usefulness of combining measures developed for different purposes to see what emerges. Basically, you will get out what you put in. If you put five dimensions in, you should get five dimensions out. That aside, let me comment on the methodology. There is great variation due to the 10 products. Two products are more hedonic (chocolate, cologne) than the others and should elicit a different factor structure (Zaichkowsky, 1987). I would guess, based on my own experience, that if we factored each product separately, a different solution or structure would appear for each product. Some products are just very different in the type of involvement they elicit. This fact is demonstrated in Table 5 of the Jain and Srinivasan paper. What are the implications of irrelevant dimensions for some products?

What the Jain and Srinivasan paper contributes is another measure of five facets of involvement. If shorter is better, then maybe it is better, but there is no test of the stability of the 15-item scale, criterion validity is not checked, and the construct validity study provided is vague. Who were the respondents, what was the sample size, and what was the context of the study? Conceptually, what is the relevance of the five facets of involvement found in Tables 4 and 5? This part of the paper needs much more elaboration.

The bottom line is that this paper empirically offers the literature some variations on current measures of involvement. The success of the paper may be left to other researchers' use of the suggested measures. This is now a cold topic for consumer behavior researchers. I do not think there is much interest in one more paper that empirically attempts to suggest a better measure of involvement. However, let me leave the researchers with some ideas on what I think would be a more interesting study on the structure of involvement. Given that (1) there is great variation among products in average level of involvement, (2) people vary widely in their level of product involvement, and (3) there seem to be different types of involvement, a three-mode factor analysis may shed light on the product/person/facet structure of involvement. This would allow us to investigate the different relationships between products, involvement, and people. In other words, what kind of people view what kind of products with what type of involvement? Some products may be mainly pleasure products and some mainly risk products. What kind of people view haircuts as hedonic, and what kind of people view haircuts as mainly risk? These types of questions are conceptually interesting to me as an involvement researcher. I hope the researchers continue to do work in this area, but implore them to move on from measuring involvement to discovering relationships between consumer behavior and levels of involvement.

"WHEN DO MEASURES OF KNOWLEDGE MEASURE WHAT WE THINK THEY ARE MEASURING?"

The clarification of the concept of knowledge, the process by which we acquire knowledge, and the implications of having knowledge are of fundamental concern to consumer behavior researchers. I would like to think that we all know that simple subjective self-report measures of knowledge are not representative of true objective knowledge structures. However, this paper tries to demonstrate that self-report measures may be correlated with more complex measures when dealing with experts -- experts being defined here as those with formal training. It also suggests that a free-elicitation method of abstracting knowledge is associated with self-reports of knowledge from nonexperts. To me there are some complex issues here that have to do with knowledge obtained by experience versus knowledge obtained by training, which may be germane to the argument at hand.

That aside, the first problem I had was understanding the free-elicitation method and its validity as a measure of knowledge. I would like to know what the six probes were for nutrition and the number of responses elicited for each group under each probe. The pre-developed list of food and nutrition concepts seems crucial for coding. Where did it come from? How valid is the list?

The decision to increase the objective knowledge test from 23 items to 85 items to increase reliability is misleading. A Cronbach alpha of .68 for the 23-item test actually reflects greater reliability, item for item, than the .84 alpha for the 85-item test. The formula (Nunnally, 1978) for deciding the number of items needed to obtain a desired reliability is:

k = rkk(1 - r11) / [r11(1 - rkk)]

where

rkk = desired reliability

r11 = reliability of existing test

k = number of times the test would have to be lengthened (or shortened) to obtain a reliability of rkk.

Given a reliability of .68 on a 23-item test and a desired reliability of .85, we can solve for k:

k = .85(1 - .68) / [.68(1 - .85)] = 2.67

Therefore, 2.67 x 23 items = 61 items should give a reliability of .85. The reliability of .84 on 85 items thus indicates lower per-item reliability than the original 23-item test. Clearly some of the 85 items should have been omitted due to low item-to-total correlations. There is no indication that the 85 items are a good measure of nutrition knowledge. We are not told what the average scores on the 85 items were or what the variation in scores was among the groups.
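The calculation above can be sketched in a few lines. This is simply Nunnally's lengthening formula as quoted, with the .68 obtained and .85 desired reliabilities from the text plugged in:

```python
# Nunnally's (1978) test-lengthening formula: given the reliability of
# an existing test (r11) and a desired reliability (rkk), k is the
# factor by which the test must be lengthened (k > 1) or shortened
# (k < 1) to reach the desired reliability.

def lengthening_factor(r11, rkk):
    """Return k, the lengthening factor needed to move a test's
    reliability from r11 to rkk."""
    return (rkk * (1 - r11)) / (r11 * (1 - rkk))

# The worked example from the text: a 23-item test with alpha = .68,
# target reliability .85.
k = lengthening_factor(0.68, 0.85)
print(round(k, 2))       # 2.67
print(round(k * 23))     # 61 items needed
```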

Given that the two groups differed in age, marital status, and probably quantity of direct experience with food preparation, it is likely that the free-elicitation method tapped into the subjects' knowledge based on prior use. In my research, I find experience or prior product use and subjective knowledge are usually correlated, while prior product use and objective knowledge are not necessarily correlated (Zaichkowsky, 1985b). Is that what we find in Table 1, where free elicitation and subjective knowledge are correlated for housewives but not for students?

In conclusion, I think the researchers are working in an important but difficult area. As a whole, I believe the topic area is tepid. The reason I say this is because special ACR sessions about knowledge and its relation to other constructs were rejected based on a low level of interest by other researchers. It may not be fashionable to work on this area, but I think many contributions are still to be made with respect to knowledge structures.

"INDIVIDUAL DIFFERENCES IN VALUE STABILITY"

The contribution of this paper is that it nicely demonstrates causes for instability in doing research on values. The usefulness of the construct "self-consciousness" as a covariate for all researchers who work on values is made explicit. This assumes, of course, normality of the data presented in this paper. I could not assess this point accurately, given the presentation of the data. Therefore, I would like to address the data analysis to raise some concerns I had about the paper and the methods used by the researchers.

First of all, the power of the experiment appears very low. If there were 12 treatments (3 communication x 2 order x 2 self-consciousness = 12 cells) and only 130 subjects, then only about 10 people appeared in each cell. Additionally, cell sizes were likely very unequal due to splitting on the self-consciousness measures. The authors were careful not to report on the exact number of people in each treatment. If the distribution among cells was very skewed with these small cell sizes, the reliability of the data analyses as a whole could be questionable.

The manipulation check suffers from a lack of significance testing. The categorical data seem open to a chi-square test. I would suggest future work use a scale similar to strongly disagree (1), disagree (2), neither agree nor disagree (3), agree (4), strongly agree (5), and then do a t-test between groups to determine the strength of the manipulation. Also, one manipulation-check question might not be enough; I would suggest two or three for reliability of the measure.
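The chi-square test suggested above is easy to run even by hand. A minimal sketch, using entirely hypothetical counts (the paper's actual manipulation-check frequencies are not reported), assuming a 2x2 table of group by manipulation-check response:

```python
# Pearson chi-square test of independence for a 2x2 contingency table,
# standard library only. For df = 1 the upper-tail p-value can be
# computed directly: P(X > x) = erfc(sqrt(x / 2)).
import math

def chi_square_2x2(table):
    """Chi-square statistic and p-value (df = 1) for a 2x2 table
    given as [[a, b], [c, d]]."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n
            stat += (table[i][j] - expected) ** 2 / expected
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical data: counts of subjects "passing" the manipulation
# check in the treatment vs. control group.
stat, p = chi_square_2x2([[45, 15], [30, 30]])
print(round(stat, 2), round(p, 4))  # 8.0 0.0047
```

With frequencies like these, the manipulation's success (or failure) could be stated with an actual significance level rather than left to inspection.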

The authors' decision to cut up the self-consciousness scale baffles me. Conceptually, an eloquent argument was made for self-consciousness; then some items were picked from the scale and called self-reflectiveness, yet the authors continue to refer to self-consciousness. I take issue here with the usefulness of the information in Table 1. A major question is what percent of the variance is accounted for by the first and second factors. If the first factor accounts for most of the variance, then a unidimensional scale of self-consciousness exists. Conceptually, I do not see any difference between self-reflectiveness and internal state awareness. I see no content validity in saying that the item "I sometimes have the feeling that I'm off somewhere watching myself" represents the extent to which individuals examine their motivations and not awareness of feeling states, while the item "I'm aware of the way my mind works when I work through a problem" represents awareness of feeling states and not the extent to which individuals examine their motivations. The point I am trying to make is that the authors have fiddled with the self-consciousness scale without showing they have done so in a valid and reliable manner. This raises the question in my mind: did the researchers use the full scale, find it didn't work, and then scramble to massage the data to "prove" their hypothesis?

The authors do not tell us the median score that was used to divide the sample into high and low self-consciousness. Was it two or three on this five-point scale? Are people who scored two significantly different from people who scored three? Given this split and the likely unequal and small cell sizes, the authors report some strong effects on their dependent variable. Besides the significance of the independent variables, the authors need to report the amount of variance (omega squared) accounted for by each variable. This would give the reader a much better indication of the importance of order, self-consciousness, and message when measuring values.
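The omega squared statistic requested above requires nothing beyond the ANOVA summary table the authors already have. A minimal sketch with hypothetical sums of squares (the paper's actual values are not reported), using Hays' formula for a fixed-effects design:

```python
# Omega squared: the proportion of total variance in the dependent
# variable attributable to one factor in a fixed-effects ANOVA.
# Hays' formula: (SS_effect - df_effect * MS_error) / (SS_total + MS_error)

def omega_squared(ss_effect, df_effect, ms_error, ss_total):
    """Effect size for one factor, from ANOVA summary quantities."""
    return (ss_effect - df_effect * ms_error) / (ss_total + ms_error)

# Hypothetical ANOVA summary for a single factor:
# SS_effect = 40, df_effect = 1, MS_error = 4, SS_total = 400.
print(round(omega_squared(40, 1, 4, 400), 3))  # 0.089
```

A factor can be highly significant yet account for a trivial share of variance, which is exactly why reporting omega squared alongside the F-tests matters here.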

Despite the methodological flaws, I really liked this paper, as it had a strong conceptual argument. I feel the topic of 'values' will be hot for consumer researchers, mainly due to the shifting values of consumers and how that shift is reflected in their consumption patterns, for example, the success of green products in grocery stores. Some major issues to address in this line of research are the stability of manipulated values and their interaction with overt behavior. If we can easily manipulate the values of some people, what is the effect over time? Once a value is manipulated in some way, can it be manipulated back to the original point with further communication? Does there have to be an overt commitment to the value via behavior to stabilize the value? We need more studies over long time periods to shed light on these points. In conclusion, I would encourage the authors to continue research in this area.

REFERENCES

Higie, Robin A. and Lawrence F. Feick (1989), "Enduring Involvement: Conceptual and Measurement Issues," in Advances in Consumer Research, Vol. 16, Tom Srull (ed.), Association for Consumer Research, 690-696.

Laurent, Gilles and Jean-Noel Kapferer (1985), "Measuring Consumer Involvement Profiles," Journal of Marketing Research, Vol. 22, (February), 41-53.

McQuarrie, Edward F. and J. Michael Munson (1987), "The Zaichkowsky Personal Involvement Inventory: Modification and Extension," in Advances in Consumer Research, Vol. 14, P. Anderson and M. Wallendorf (eds.), Association for Consumer Research, 36-40.

Nunnally, Jum C. (1978), Psychometric Theory, 2nd edition, McGraw-Hill, New York.

Ratchford, Brian T. (1987), "New Insights About the FCB Grid," Journal of Advertising Research, (August/September), 24-28.

Zaichkowsky, Judith Lynne (1985a), "Measuring the Involvement Construct," Journal of Consumer Research, Vol. 12, (December), 341-352.

Zaichkowsky, Judith Lynne (1985b), "Familiarity: Product Use, Involvement or Expertise?" in Advances in Consumer Research, Vol. 12, Elizabeth C. Hirschman and Morris B. Holbrook (eds.), Provo, UT: Association for Consumer Research, 296-299.

Zaichkowsky, Judith Lynne (1987), "Emotional Aspects of Product Involvement," in Advances in Consumer Research, Vol. 14, M. Wallendorf and P.E. Anderson (eds.), Ann Arbor: Association for Consumer Research, 32-35.
