
Advances in Consumer Research Volume 3, 1976      Pages 149-154

A MULTIVARIATE TEST OF CAD INSTRUMENT CONSTRUCT VALIDITY

Michael J. Ryan, The University of Alabama

Richard C. Becherer, Wayne State University

ABSTRACT

Cohen's CAD instrument was examined for two specific validation issues through the use of principal components and factor analysis. Although the instrument did not meet the validation requirements, the results indicate that a firm foundation has been established for future development that may lead to a valid instrument.

INTRODUCTION

A common criticism of consumer research is the use of unvalidated instruments to generate data (Kollat, Engel, and Blackwell, 1970; Robertson and Ward, 1972). Until recently (Davis, 1971; Becherer, Bibb, and Riordan, 1973; Heeler and Ray, 1974; Horton, 1974), research investigating the crucial problems of test reliability and validity has been absent from the marketing literature. In a review of attempts to relate purchasing behavior to personality, Kassarjian (1971) concluded that measures commonly employed were generally intended for other applications and inadequate for the intended purposes. He suggested that consumer behavior researchers develop and validate their own instruments to measure the personality variables that go into the purchase decision. In line with this thinking, Cohen (1967, 1968) developed the CAD scales, an instrument designed specifically to diagnose interpersonal orientations useful in predicting and explaining consumer behavior.

Some success has been achieved in the initial applications of the instrument (Cohen, 1967; Cohen, 1968; Kernan, 1971) and both Kassarjian (1971) and Heeler and Ray (1974) in their inclusive reviews of the literature have cited the CAD as worthy of further investigation. While admittedly picking and choosing from the data, Cohen's (1967, 1968) initial published research linked each of the basic response orientations to specific patterns of product or brand usage as well as television and magazine preferences. Other encouraging results with the CAD were evidenced by Kernan's (1971) findings which suggest relationships between compliant, aggressive, and detached orientations and such things as information source utilization and fashion approval.

These findings must be tempered, however, by the fact that more "positive" results have not been published using the CAD scales. Further, research must be initiated to determine if the successful applications of the instrument are merely artifacts of invalid and unreliable measures. In an attempt to construct something close to a multitrait-multimethod matrix, Heeler and Ray (1974) view their limited CAD validation effort as positive and suggest that the CAD should be examined more extensively. The research reported here continues the validation process by examining an aspect of the instrument's construct validity.

THE CAD INSTRUMENT

Cohen postulated that Horney's (1945) tripartite paradigm would provide a general model of interpersonal response traits useful in predicting a broad range of product purchase decisions. Horney theorized that individuals had three predominant orientations. These were described as compliant, aggressive, and detached.

The compliant individual wants to be part of the activities of others. He wants to be loved, wanted, appreciated, and needed. He sees in other people the solutions to many problems of life, and wants to be protected, helped, and guided. This person tends to be oversensitive to the needs of others, overgenerous, over-grateful, and overconsiderate. This individual tends to avoid conflict and subordinates himself to others. Important attributes are goodness, sympathy, love, unselfishness, and humility.

The aggressive individual wants to excel, to achieve success, prestige, and admiration. Other people are seen as competitors. This type of person strives to be a superior strategist, to control his emotions, and bring his fears under control. Strength, power, and unemotional realism are seen as necessary qualities. People are valued if useful to one's goals. This person seeks to manipulate others, and will go out of his way to be noticed, if such notice brings admiration.

The detached individual wants to put emotional "distance" between himself and others. Freedom from obligations, independence, and self-sufficiency are highly valued. This type of person does not want to be influenced, or share experiences. Intelligence and reasoning are valued instead of feelings, and conformity is disliked. This individual considers himself unique, possessing certain gifts and abilities that should be recognized without going out of his way to show others. Generally, this individual is somewhat distrustful of others.

As a first step in applying this notion, Cohen developed a 35-item instrument composed of three sets of scales designed to measure Compliant, Aggressive, and Detached interpersonal orientations. Each of the 35 items was followed by a six-place bipolar adjective set ranging from "extremely desirable" to "extremely undesirable." These items were scored from one to six, with ten items included for each of the Compliant and Detached scales (C1 through C10 and D1 through D10) and fifteen items included on the Aggressive scale (A1 through A15). Each scale is scored by summing across items, with high scores (those at least one standard deviation above the sample mean for a trait) indicating the respondent's orientation toward one of the three groups. The individual questions are reproduced in the Appendix.
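To make the scoring rule concrete, the following minimal Python sketch sums item scores within each scale and flags respondents whose scale score falls at least one standard deviation above the sample mean. The column layout and the simulated responses are illustrative assumptions only; they are not the CAD data or the authors' procedure.

```python
import numpy as np

# Hypothetical response matrix: 175 respondents x 35 items, each item scored
# from 1 to 6 on the desirability scale described above. The column positions
# assigned to each scale are assumptions made purely for illustration.
rng = np.random.default_rng(0)
responses = rng.integers(1, 7, size=(175, 35))

scales = {
    "Compliant":  list(range(0, 10)),    # C1-C10 (assumed columns)
    "Aggressive": list(range(10, 25)),   # A1-A15 (assumed columns)
    "Detached":   list(range(25, 35)),   # D1-D10 (assumed columns)
}

for name, cols in scales.items():
    score = responses[:, cols].sum(axis=1)               # sum across items
    high = score >= score.mean() + score.std(ddof=1)     # >= 1 SD above the sample mean
    print(f"{name}: {high.sum()} of {len(score)} respondents scored high")
```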

THE PROBLEM OF CONSTRUCT VALIDITY

Since the purpose of the CAD instrument is to measure three separate constructs, it should have demonstrated validity for this purpose. That is, it should have construct validity. Nunnally (1967) outlines three major steps in this process: 1) specifying the domain of observables; 2) determining to what extent all, or some, of these observables correlate with one another; and 3) determining whether or not one, some, or all measures of such variables act as though they measured the construct.

The first step generally involves defining the construct as it relates to words at a lower level of abstraction. Horney initiated this step and Cohen has employed some of these words to construct the test items. Cohen (1968) has also furnished evidence for this aspect in terms of face validity.

The third aspect is a matter of empirical testing to determine if operationally defined constructs are related to valid measures of other similar constructs and do not relate to valid measures of dissimilar constructs. Cohen (1967) has also furnished evidence that the instrument meets this condition through use of correlational analyses.

The second condition concerns the adequacy with which the domain of observables fits together. That is, do the instrument items relate to each other in a manner consistent with the proposed relationships among the constructs they represent? In an indirect manner, Cohen (1967, 1968) and Kernan (1971) may have furnished some support for this condition: their successes indicate that the measures operated as the constructs they represent were expected to operate, which may indirectly indicate that the items fit together. However, in other research the measures have not operated as expected (Cohen and Golden, 1972). Consequently, a direct test is needed.

Two demonstrations are needed to furnish direct evidence that the instruments fulfill the second condition. First, the items composing each of the three scales should correlate highly with one another. This indicates that the items in a particular scale all measure much the same thing. Second, the items composing the total instrument should split into three groups such that members of each group correlate highly with one another and correlate much less with the members of other groups. This indicates that three distinct things are being measured. Strictly speaking, the issues so far addressed concern reliability rather than validity since the 'things' referred to have not yet been empirically identified. To the extent that empirical groupings are congruent with those conceptually specified on an a priori basis, the measurements take on meaning. The first vestige of construct validity is evidenced when measures shown to be reliable are congruent with theoretical expectations. In the present case, if items a priori specified to indicate a construct cluster together but not across groups, the items show evidence of indicating one of the three construct orientations. This study addresses these issues through the use of principal components and factor analysis.

METHODOLOGY

The CAD questionnaire was administered to a sample of 175 undergraduate junior and senior college students located at two Midwestern universities. Since reliability is a necessary prerequisite to validity testing, this aspect of the instrument was examined before proceeding. Cronbach's (1951) coefficient Alpha measure of internal consistency was utilized as a reliability estimate. The reliability estimates were:

Compliant Scale     .724

Aggressive Scale    .680

Detached Scale      .514

These estimates were low enough to attenuate correlations (Nunnally, 1967: 226). This raised the question of whether the scales were unreliable to the extent of hampering the instrument's use in a correlational study.
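For readers who wish to reproduce this reliability check on their own data, the following is a minimal sketch of the coefficient alpha computation (Cronbach, 1951). The simulated ten-item matrix is a placeholder for illustration, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a (respondents x items) matrix of item scores."""
    k = items.shape[1]                          # number of items in the scale
    item_var = items.var(axis=0, ddof=1).sum()  # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_var / total_var)

# Placeholder data for a ten-item scale; real CAD responses would be used here.
rng = np.random.default_rng(1)
compliant_items = rng.integers(1, 7, size=(175, 10)).astype(float)
print(round(cronbach_alpha(compliant_items), 3))
```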

The size of the item intercorrelation matrix made it difficult to analyze within and across scale correlations. Consequently, the general factor analysis model was employed to resolve the set of 35 items (p) linearly in terms of a small number of factors (m) that could ultimately be identified as indicating the three orientations in question. This model assumes that the total variance of an item is composed of three parts: common, specific, and error variance. Specific and error variance are not separated and are referred to as unique variance. Error variance is assumed to be "unreliable" or random variance which is sample specific for a given item. Common and specific variance are assumed to represent "reliable" or systematic variance likely to be stable from sample to sample. Common variance is shared among items whereas specific variance is item specific.

The possibility of instrument unreliability had implications for choosing principal components or factor analysis as the method of obtaining the linear reduction. Principal components analysis (a factor solution of a correlation matrix with 1's on the diagonal) is the method which analyzes the total variance of the items. In matrix form, the principal components model appears as follows:

R(pxp) = F(pxm) F'(mxp)

where R is the correlation matrix and F is a matrix of factor loadings. In this case the linear resolution contains all the variance in the correlation matrix.
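As an illustration of this model (with simulated data standing in for the CAD responses), the principal components loading matrix can be obtained from the eigendecomposition of the correlation matrix with 1's on the diagonal; retaining all p components reproduces R exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(175, 35))            # stand-in for 175 respondents x 35 items
R = np.corrcoef(X, rowvar=False)          # p x p correlation matrix, 1's on the diagonal

eigval, eigvec = np.linalg.eigh(R)        # eigendecomposition (ascending order)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

F = eigvec * np.sqrt(eigval)              # loading matrix: F = V * Lambda^(1/2)
print(np.allclose(F @ F.T, R))            # all p components retained: F F' = R -> True
```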

Since the scales have demonstrated low reliabilities (and therefore low common variance), submitting them to principal components analysis may have led to two general problems. First, if the majority of the variance in the item correlation matrix had been random, it would have been unlikely that variables would have clustered in the factor matrix. Second, if clusters had formed, the factors may have represented correlated error variance. Such factors would have been unlikely to replicate in another sample.

Methods of factor analysis, employing estimates of common variance in the principal diagonal of the correlation matrix, analyze only common variance (that portion of the reliable variance of an item which correlates with other items in the matrix). In matrix form, the factor analysis model appears as follows:

R(pxp) = F(pxm) F'(mxp) + U(pxp)

where R and F are as previously described except that R contains communality estimates on the diagonal instead of 1's, F only accounts for common variance, and U is a diagonal matrix of unique variances. Since error variance (in this case instrument unreliability) is a subset of unique variance, it is removed from the F matrix. That is, the correlational analysis in the linear solution does not consider unreliability.

Since common variance is a subset of reliable variance, the factor analysis method should have produced stable factors and allowed clusters to form regardless of error magnitude. This method could have been misleading, however, since the factors and clusters it would have produced would not indicate whether the instrument may be ineffective due to unreliability (large measurement error). Such ineffectiveness would, instead, have been indicated by a failure to find clusters in the principal components analysis. Consequently, both principal components and factor analysis were employed in this research.

Estimated communalities in the factor analysis were the squared multiple correlations of each item with all other items in the matrix. This method was chosen since it is the lower-bound estimate of common variance (Guttman, 1956) and is the most conservative and widely used of the communality estimates.

Following the Kaiser (1960) criterion, only factors with factor contributions (eigenvalues) of 1.0 or greater were retained in the principal components and factor analysis. Although there is some dispute about the appropriateness of this criterion, it was adopted since it is the most widely known and used decision rule for the number of factors problem.
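A sketch of the extraction procedure just described, under the assumption of simulated data: squared multiple correlations (Guttman's lower bound) replace the 1's on the diagonal, the reduced matrix is eigendecomposed, and only factors with eigenvalues of 1.0 or greater are retained (Kaiser, 1960). This is a one-step principal-axis illustration, not the exact program the authors used.

```python
import numpy as np

def common_factor_loadings(R):
    """One-step principal-axis factoring with SMC communality estimates."""
    # Squared multiple correlation of each item with all other items:
    # h_i^2 = 1 - 1 / (R^{-1})_{ii}
    smc = 1.0 - 1.0 / np.diag(np.linalg.inv(R))

    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, smc)               # communalities on the diagonal

    eigval, eigvec = np.linalg.eigh(R_reduced)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]

    m = int(np.sum(eigval >= 1.0))                 # Kaiser criterion: eigenvalues >= 1
    return eigvec[:, :m] * np.sqrt(eigval[:m])     # p x m common-factor loadings

# Placeholder correlation matrix from simulated data.
rng = np.random.default_rng(3)
R = np.corrcoef(rng.normal(size=(175, 35)), rowvar=False)
print(common_factor_loadings(R).shape)
```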

In order to locate the test items with respect to the factors and obtain a stable representation of this location, retained factors were rotated to simple structure (Thurstone, 1947). Since Horney's notion and Cohen's operationalization indicated that the resulting rotated factors would be defined by items related to each other and not to other variables in the matrix, an orthogonal rotation was employed. The Varimax (Kaiser, 1958) solution to the orthogonal rotation was employed since it has proven to yield a good approximation of simple structure.
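The Varimax rotation itself can be written compactly; the routine below follows the commonly used iterative form of Kaiser's (1958) criterion and is offered as an illustrative assumption rather than the authors' implementation. It would be applied to the p x m loading matrix produced by an extraction step such as the one sketched above.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate a p x m loading matrix toward simple structure."""
    p, m = loadings.shape
    rotation = np.eye(m)
    previous = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient-like target for the Varimax criterion.
        target = rotated ** 3 - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0))
        u, s, vt = np.linalg.svd(loadings.T @ target)
        rotation = u @ vt
        current = s.sum()
        if previous != 0.0 and current / previous < 1.0 + tol:
            break                                  # criterion has converged
        previous = current
    return loadings @ rotation

# Usage (hypothetical): rotated = varimax(common_factor_loadings(R))
```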

RESULTS

The principal components factor matrix is presented in Table 1 and the factor analysis is presented in Table 2. Six factors were retained in the principal components solution, only one of which accounted for more than 10% of the total instrument variance. Together, the retained factors accounted for only 39.8% of the total instrument variance. The rotation did not produce simple structure and no factor loading patterns were evident among the items. That is, the items did not separate into clusters associated with specific factors.

The factor analysis solution produced three rather clearly defined factors and one extraneous factor. These four factors accounted for 70.3% of the common variance in the instrument. The first three factors accounted for 60.8% of the common variance, with each of the first two factors accounting for more than 22% and the third accounting for more than 15% of the total common variance.

The factor loadings indicated three distinct clusters. All but nine items (A1, A6, A9, A13, A14, C2, D2, D4, D10) grouped together on three distinct factors. Each of the first three clusters was composed of items designed to measure a specific orientation. Ten of the fifteen items designed to measure the aggressive orientation loaded on the first factor, nine of the ten items designed to measure the compliant orientation loaded on the second factor, and seven of the ten items designed to measure the detached orientation loaded on the third factor. These factors were labeled accordingly. One item (D4) designed to measure the detached orientation and two items (A6, A9) designed to measure the aggressive orientation loaded on the fourth factor. This factor was not identified. The remaining six items did not load on any one factor. Three of these items (A1, A13, A14) were designed to measure the aggressive orientation, two (D2, D10) to measure the detached orientation, and one (C2) to measure the compliant orientation.

A somewhat weaker argument could be made that item A14 indicated the aggressive trait as expected since it loaded on the aggressive factor and the unknown factor. The same argument could also be made for items D2 and D10 in regard to the detached trait.

TABLE 1

FACTOR LOADING MATRIX: PRINCIPAL COMPONENTS -- VARIMAX ROTATION CONSTRAINED TO FACTORS WITH EIGENVALUES > 1

DISCUSSION

The principal component solution yielded a large number of factors, a small amount of explained variance, and the rotated solution did not approximate simple structure. On the other hand, the factor analysis solution yielded four factors and a large amount of explained common variance. The rotated factor solution approximated simple structure and the majority of items clustered according to design on three factors that appeared to represent Aggressive, Compliant, and Detached orientations.

The principal component results indicated that the majority of the variance in the instrument was either random or specific to the individual items. Since simple structure was not obtained, the random components appeared to be uncorrelated and large enough to preclude item clustering according to the design of the instrument. This supports the unreliability suspicion raised by the low internal consistency estimates.

TABLE 2

FACTOR LOADINGS MATRIX: FACTOR ANALYSIS -- VARIMAX ROTATION CONSTRAINED TO FACTORS WITH EIGENVALUES > 1

The factor analysis results indicated that when only a portion of the reliable (common) variance was analyzed the items generally behaved as expected. These findings, from both the principal components and factor analysis suggest that the domain of observables did not fit together. Although the instrument items related to each other in a consistent manner, these relationships appeared to have been eclipsed by instrument unreliability. This indicates that the instrument did not fulfill the second condition for construct validity.

The findings from this study are limited by the ungeneralizable sample. The instrument may perform differently on different target market segments. In fact, Kernan (1971) reported different internal consistency estimates across five separate studies. The average estimate across these studies was .725 and the rank order of the three scales' reliability was generally the same as reported in this study. However, the possibility that the instrument may not operate as designed seems important enough to suggest that an analysis such as that carried out in this research is appropriate for the particular segments to which the instrument may be applied.

CONCLUSION

The evidence did not indicate that the instrument fulfills the second necessary condition for construct validity. It appears that the instrument has promise but more work is needed. The domain of observables should be enlarged so items can be added to each of the three scales. This is a generally accepted method of increasing scale reliability (Lord and Novick, 1968) that appears feasible since the instrument in its present form is easily completed in less than fifteen minutes. That is, respondent fatigue should not overcome reliability gains resulting from increased questionnaire length. As a first step, reliabilities should be increased until estimates are consistently above .80 so that correlation attenuations will not be a problem (Nunnally, 1967). After this is accomplished construct validation tests should again be undertaken.

The study also indicates that reliance on factor analysis alone in validation studies may produce misleading findings. Factor analysis, unlike principal components, does not consider instrument reliability, which is a necessary condition for validity. The evidence suggests that utilization of both techniques may allow a researcher to ascertain whether reliability levels are low enough to invalidate an instrument.

The construction of a reliable and valid test instrument appears to be a tedious but necessary process. The validation work to date on the CAD scale has provided a foundation for continuation of the validation process. However, the instrument unreliability suggested by the findings in this study poses some problems. For example, Cohen and Golden (1972), in applying the instrument, report that differences in individual interpersonal orientations did not prove to be a significant factor in the acceptance of information from others. This failure to support hypothesized relationships may have been due to attenuation resulting from instrument unreliability. The lack of demonstrated instrument construct validity precludes what could otherwise be interpreted as disconfirmation of the theory, since a valid instrument may have supported the relationships. The validation process should continue, since it is doubtful that the instrument, in its present form, will allow a researcher to ascertain whether a failure to support or disconfirm hypotheses is the result of an inadequate theory or an inadequate methodology.

APPENDIX

C.A.D. QUESTIONNAIRE INSTRUCTIONS

REFERENCES

Becherer, Richard C., Jon F. Bibb, and Edward A. Riordan, "Spousal Perception of Household Purchasing Influence: A Multiperson-Multiscale Validation," Proceedings, Combined Conference, American Marketing Association, 1973, 289-292.

Cohen, Joel B. "An Interpersonal Orientation to the Study of Consumer Behavior," Journal of Marketing Research, 4 (August 1967), 270-278.

Cohen, Joel B., "Toward an Interpersonal Theory of Consumer Behavior," California Management Review, 10 (Spring 1968), 73-80.

Cohen, Joel B. and Ellen Golden, "Informational Social Influence and Product Evaluation," Journal of Applied Psychology, 56 (February 1972), 54-59.

Cronbach, Lee J. "Coefficient Alpha and the Internal Structure of Tests," Psychometrika, 16 (September 1951), 297-334.

Davis, Harry L. "Measurement of Husband-Wife Influence in Consumer Purchase Decisions," Journal of Marketing Research, 8 (August 1971), 305-312.

Guttman, Louis, "'Best Possible' Systematic Estimates of Communalities," Psychometrika, 21 (September 1956), 273-285.

Heeler, Roger M. and Michael L. Ray, "Measure Validation in Marketing," Journal of Marketing Research, 11 (November 1974), 361-370.

Horney, Karen, Our Inner Conflicts. New York: W.W. Norton and Co., Inc., 1945.

Horton, Raymond L. "The Edwards Personality Preference Schedule and Consumer Personality Research," Journal of Marketing Research, 11 (August 1974), 335-337.

Kaiser, Henry F. "The Application of Electronic Computers to Factor Analysis," Educational and Psychological Measurement, 21 (Spring 1960), 141-151.

Kaiser, Henry F. "The Varimax Criterion for Analytic Rotation in Factor Analysis," Psychometrika, 23 (September 1958), 187-200.

Kassarjian, Harold H. "Personality and Consumer Behavior: A Review," Journal of Marketing Research, 8 (November 1971), 409-418.

Kernan, Jerome B. "The CAD Instrument in Behavioral Diagnosis," in D. M. Gardner, ed., Proceedings. The Association for Consumer Research, 1971, 307-312.

Kollat, David T., James E. Engel, and Roger D. Blackwell, "Current Problems in Consumer Behavior Research," Journal of Marketing Research, 7 (August 1970), 327-332.

Lord, Frederic M. and Melvin R. Novick, Statistical Theories of Mental Test Scores. Reading, Massachusetts: Addison-Wesley Publishing Company, 1968.

Nunnally, Jum C. Psychometric Theory. New York: McGraw-Hill, 1967.

Robertson, Thomas S. and Scott Ward, "Toward the Development of Consumer Behavior Theory," Proceedings. American Marketing Association, 1972, 57-64.

Thurstone, Louis L. Multiple Factor Analysis. Chicago: University of Chicago Press, 1947.
