# What Causes Estimation Problems When Analyzing MTMM Data?

ABSTRACT - In recent years, a plethora of alternative analytic techniques has been suggested for analyzing MTMM data. Previous research has compared alternative techniques without explicitly considering the possible reasons why some techniques encounter estimation problems (Bagozzi and Yi 1993, Marsh and Bailey 1991). This study attempts to identify what might be causing common estimation problems when using confirmatory factor analysis to model MTMM data. Five alternative factor analysis techniques (correlated errors with methods correlated, correlated uniqueness with methods uncorrelated, fixed errors, multiplicative, and the Rindskopf parameterization) were used to test for identification problems, multiplicative trait-method relationships, sampling error, true errors close to zero, or over-fitting. None of the recommended factor analysis techniques worked in most cases. It is suggested that a common cause of estimation problems is the quality of the data rather than the technique used to estimate parameters.

##### Citation:

*Joseph A. Cote (1995), "What Causes Estimation Problems When Analyzing MTMM Data?", in NA - Advances in Consumer Research Volume 22, eds. Frank R. Kardes and Mita Sujan, Provo, UT: Association for Consumer Research, Pages: 345-353.*


When designing measures, the assessment of validity is of utmost importance (Churchill 1979). The multitrait-multimethod (MTMM) matrix has become accepted as the preferred way to assess construct validity, and confirmatory factor analysis (CFA) is the most popular way to analyze MTMM data. CFA's elegance and power enticed researchers to quickly adopt the technique and develop specific guidelines for its application (Widaman 1985). Yet when these guidelines were applied, results were quite ambiguous. In most situations, models failed to converge, contained unreasonable or inconsistent estimates, and/or failed to fit the data (Brannick and Spector 1990, Marsh 1989). Since 1985, researchers have jumped from technique to technique in order to solve estimation problems. This has led to a "technique of the month" mentality. When any problems are encountered with a given estimation technique, a "new, improved" approach is suggested. Unfortunately, researchers have been less than systematic in identifying why a particular technique might be superior. For example, the direct products model was offered to deal with multiplicative models (Browne 1984), yet there was no attempt to determine if multiplicative relationships were common. Researchers have simply compared a set of alternative methods and then concluded that their suggested approach is best. When that "best" approach ultimately results in estimation problems, researchers simply move to the next "superior" approach. The purpose of this study is to take a more systematic look at potential causes of estimation problems when using confirmatory factor analysis to analyze MTMM data.

Numerous explanations for estimation problems have been proposed, but past discussion has centered on four possibilities: error components close to boundary values, under-identification, multiplicative models, and over-fitting. Heywood cases and other boundary estimates are the most commonly encountered problems when estimating factor analysis models. Heywood cases were originally thought to be caused by true values close to the boundary combined with sampling fluctuations (Dillon, et al. 1987, van Driel 1978). Heywood cases were commonly dealt with by fixing the offending estimate (Cote and Buckley 1987). A more appropriate way to deal with true scores close to the boundary is to use Rindskopf's parameterization, which eliminates the possibility of Heywood cases by constraining error variances to be nonnegative (Rindskopf 1983, Rindskopf 1984). This leads to the first hypothesis:

H1 If estimation problems are caused by true values close to the boundary, then the Rindskopf parameterization should correct the problem.
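To make the reparameterization concrete, consider a minimal Python sketch (function names are illustrative, not from Rindskopf's papers): instead of estimating an error variance directly, the model estimates an unconstrained coefficient d and uses d² as the variance, which cannot go negative.

```python
# Sketch of Rindskopf's reparameterization. In the standard setup an
# optimizer estimates the error variance theta directly and can drive it
# negative (a Heywood case). Rindskopf instead estimates an unconstrained
# coefficient d and uses theta = d**2, nonnegative by construction.
# Function names are illustrative, not from the original papers.

def implied_variance_direct(loading, theta):
    # Standard parameterization: var(x) = lambda**2 + theta (theta may go negative).
    return loading**2 + theta

def implied_variance_rindskopf(loading, d):
    # Reparameterized: var(x) = lambda**2 + d**2 >= lambda**2, always admissible.
    return loading**2 + d**2

# Even if the optimizer wanders to d = -0.3, the implied error variance
# is (-0.3)**2 = 0.09, never negative.
```

Whatever value the optimizer reaches for d, the implied error variance stays in the admissible region, which is why the parameterization rules out Heywood cases by construction.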

The most recent explanation for estimation problems is under-identification (Brannick and Spector 1990, Kenny and Kashy 1992). Identification problems can occur in two ways: when there is less information than estimated parameters, and when sources of information duplicate each other (Bentler and Chou 1987). Bentler and Chou (1987) suggest that factors with only two loadings (e.g., two methods) are likely to be empirically under-identified since the two items share too much information. They also point out that empirical under-identification can occur if indicators are highly correlated (Kenny and Kashy 1992). When empirical under-identification is encountered, it often results in Heywood cases and non-convergence. Under-identification is often tricky to deal with; however, several techniques have been recommended for solving this problem. The correlated uniqueness model is currently the most accepted way to deal with identification problems (Kenny and Kashy 1992, Marsh 1989). Unfortunately, the correlated uniqueness technique recommended by Marsh (1989) and tested in the literature (Bagozzi and Yi 1993, Kenny and Kashy 1992) does not account for correlated methods. Equality constraints can be used to allow for correlated methods (Cote and Greenberg 1990), although they increase the number of parameter estimates and impose different restrictive assumptions. Another suggestion for dealing with identification problems is to fix the error variance to estimated values (Marsh and Hocevar 1983). Fixing the error variance to estimated unique variance should help control empirical under-identification problems since it reduces the number of parameter estimates. This leads to the next set of hypotheses:

H2 The correlated uniqueness model (methods uncorrelated) should be appropriate if empirical under-identification is the cause of fitting problems.

H3 The correlated uniqueness model (correlated methods) should be appropriate if empirical under-identification is the cause of fitting problems.

H4 If empirical under-identification is causing fitting problems, then the fixed error variance models should be appropriate.
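For reference, the additive block diagonal model that these alternatives modify implies a simple composition rule for each MTMM correlation. The sketch below, with hypothetical uniform loadings and assuming uncorrelated traits and methods, shows how the model reproduces the familiar regions of the matrix.

```python
# Model-implied correlations under the additive block diagonal CFA model,
# with hypothetical uniform loadings and uncorrelated traits and methods:
# each measure is x = lt*Trait + lm*Method + error, so two measures
# correlate through whichever factors they share.

lt = 0.7   # trait loading (hypothetical)
lm = 0.4   # method loading (hypothetical)

def implied_corr(trait_a, method_a, trait_b, method_b):
    r = 0.0
    if trait_a == trait_b:
        r += lt * lt   # shared trait variance
    if method_a == method_b:
        r += lm * lm   # shared method variance
    return r

print(implied_corr(0, 0, 0, 1))  # validity diagonal (same trait): ~0.49
print(implied_corr(0, 0, 1, 0))  # heterotrait-monomethod: ~0.16
print(implied_corr(0, 0, 1, 1))  # heterotrait-heteromethod: 0.0
```

When two loadings must reproduce several such correlations at once, highly similar traits or methods leave the factors nearly interchangeable, which is exactly the empirical under-identification discussed above.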

Another purported cause of estimation problems relates to Campbell and O'Connell's (1967, 1982) claim that traits and methods interact in a multiplicative rather than additive fashion. If multiplicative relationships exist, then traditional CFA models are misspecified, which would likely cause estimation problems (Bagozzi and Yi 1990, Bagozzi and Yi 1991, Lastovicka, et al. 1990). Browne (1990) developed the MUTMUM program for fitting direct product models to MTMM data. This technique assumes that trait and method effects are multiplicative rather than additive. Therefore,

H5 MUTMUM should effectively deal with estimation problems caused by multiplicative MTMM data.
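The structural claim of the direct product model can be sketched with a Kronecker product; the correlation values below are hypothetical, chosen only to show the multiplicative signature.

```python
import numpy as np

# Browne's direct product model posits that (for the true-score part of the
# matrix) trait and method effects combine multiplicatively: the implied
# MTMM correlation matrix is the Kronecker product of a method correlation
# matrix and a trait correlation matrix. Values here are hypothetical.

P_traits = np.array([[1.0, 0.5],
                     [0.5, 1.0]])    # heterotrait correlation 0.5
P_methods = np.array([[1.0, 0.3],
                      [0.3, 1.0]])   # heteromethod correlation 0.3

# Measures ordered with traits nested within methods:
Sigma = np.kron(P_methods, P_traits)

# Multiplicative signature: the heterotrait-heteromethod correlation equals
# the product of the two, 0.5 * 0.3 = 0.15, rather than a sum of shared
# variances as in the additive model.
```

Campbell and O'Connell's observation that methods dilute (multiply) rather than add to trait correlations corresponds to this Kronecker structure.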

A final cause of estimation problems is over-fitting (Bagozzi and Yi 1990). If a trait only model will fit the data, then it is inappropriate to include method effects. By including method effects, the model is misspecified and as such will lead to estimation problems. This leads to the final hypothesis:

H6 If over-fitting is the cause of estimation problems, then a trait only model should adequately fit MTMM data.

Each of the hypotheses will be tested to determine if the commonly proposed explanations for estimation problems are truly causing the difficulties of analyzing MTMM data.

METHOD

Numerous MTMM matrices were screened on several criteria. First, as recommended by Brannick and Spector (1990), only MTMM matrices with more than two different traits and methods were included. Second, matrices were screened to ensure that traits were properly matched for each method. Third, matrices with low values on the validity diagonal (average < 0.4) were deleted since convergent validity was not demonstrated. Only data sets with sample sizes greater than (or very close to) 100 were selected (Boomsma 1985). Since we are interested in identifying the cause of estimation problems, matrices were screened out if they fit a traditional block diagonal model without problems. The Arora matrix was included even though the traditional block diagonal model fit, since the factor loadings were inconsistent with (much higher than) the reliability estimates. In addition, if the trait only model was appropriate, the data set was dropped from further analysis, since over-fitting would occur if method effects were added (Bagozzi and Yi 1990, Bagozzi and Yi 1991). The trait only model was considered appropriate only when the CFA and Campbell and Fiske criteria agreed that no method effects exist (see Table 1 for studies considered for analysis).
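The screening rules above can be expressed as a small filter; the function and argument names are illustrative only, not taken from the paper.

```python
# Hedged sketch of the screening criteria described above. The function
# name and argument names are illustrative, not from the original study.

def passes_screen(n_traits, n_methods, sample_size, validity_diagonal):
    if n_traits <= 2 or n_methods <= 2:
        return False   # Brannick and Spector (1990): need >2 traits and methods
    if sample_size < 100:
        return False   # Boomsma (1985); the paper also admits sizes "very close" to 100
    if sum(validity_diagonal) / len(validity_diagonal) < 0.4:
        return False   # average validity diagonal too low: convergent validity weak
    return True

print(passes_screen(3, 3, 120, [0.55, 0.48, 0.62]))  # True
print(passes_screen(3, 3, 120, [0.25, 0.30, 0.35]))  # False: validity too low
```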

EQS version 4.0 (Bentler 1992) was used to estimate all the additive models, and MUTMUM (Wothke and Browne 1990) was used to estimate the multiplicative models. Common factor analysis was used to generate unique variance estimates for the fixed errors model (if Heywood cases were encountered, principal components analysis was used). When fitting the models, there was no attempt to customize the model specification for any particular data set (Lehmann 1988; some data sets may necessitate alternative specifications). Automatic start values were used to fit the model and a maximum of 500 iterations was specified. Bagozzi and Yi (1990) outline several criteria that can be used to evaluate model appropriateness. Our analysis indicated only a few of these were needed to identify estimation problems: the number of parameter estimates held at boundary values, the comparative fit index (RMSEA for MUTMUM), examination of residuals, and the reasonableness of the estimates.

RESULTS

Despite carefully screening the matrices, estimation problems were common for all the models (see Table 2). Table 2 includes results from the traditional (block diagonal) model in order to show the types of estimation problems encountered. We can start by noting that a trait only model fit five of the 16 studies which met the screening criteria (Elbert, Flamer data set 1, Ostrom, Roberts, and Seymour). In addition to fitting a trait only model, these data sets also passed Campbell and Fiske's test for method effects. This would indicate that over-fitting might be a possible cause of estimation problems in some cases and provides partial support for hypothesis 6. These five data sets were eliminated from further study since adding method effects is inappropriate.

In general, there was little support for hypotheses 1 through 6 (see Table 3). The Rindskopf model failed to converge or had boundary values (for factor correlations) for 10 of the 11 data sets which contain method effects. Only the estimation problems for the Arora data set seemed to be due to random error components close to boundary values (see Table 2). This would indicate that boundary values do not seem to be a primary cause of estimation problems (H1 not supported).

The correlated errors model (methods correlated) resulted in Heywood cases for all 11 studies. The fixed errors model was appropriate for only three of the 11 data sets (Dunham, Flamer data set 2, and Marsh data set 2). Although it did not solve estimation problems in most cases, the correlated uniqueness model (methods uncorrelated) appears promising, fitting four of the eleven data sets (Allen, Arora, Meier, and Shavelson). Unfortunately, further examination indicates other problems may exist. The correlated uniqueness model is unable to account for correlated methods, which may cause trait loadings to be inflated. For two data sets, both the correlated uniqueness and another model fit the data. A comparison of these results indicates the correlated uniqueness model had consistently higher trait loadings than the other techniques (Arora: λ_CU = 0.819 vs. λ_Rindskopf = 0.656; Meier: λ_CU = 0.815 vs. λ_direct-product = 0.418). This raises serious doubts about the appropriateness of assuming that methods are not correlated (Fiske and Campbell 1992). In sum, empirical under-identification does not appear to be the primary source of estimation problems, although it may cause difficulty in some cases (H2, H3, and H4 marginally supported).

The direct products model seems appropriate only for the Meier data set. The Shavelson data set fit well, but the results seem inappropriate since random error is estimated to be zero, even though the average reliability is 0.76. Therefore, H5 must be rejected. It does not appear that multiplicative trait-method relationships were a common source of estimation problems.

DISCUSSION

In general, it appears that estimation problems in these data sets are not due to empirical under identification, multiplicative trait-method relationships, true errors close to zero, or sampling errors (although for any individual study these may be a problem). As such, there is no clear reason to assume that any given factor analysis technique is superior to another. The claims of previous researchers that the direct products model or correlated uniqueness model are superior to the block diagonal model (Bagozzi and Yi 1990, Bagozzi and Yi 1993, Marsh and Bailey 1991) are not supported by this study. The most surprising finding of this study is that in many cases, no models unambiguously fit the data. The results of this study beg the question, "what causes estimation problems when analyzing MTMM data?"

Multicollinearity

An often ignored problem in confirmatory factor analysis is multicollinearity. Multicollinearity can cause unstable estimates in analysis of covariance structures (Bentler and Chou 1987, Lehmann and Gupta 1989). With MTMM data, multicollinearity can exist among the traits or among the methods. For example, if the traits are highly correlated, then multicollinearity exists. Multicollinearity is commonly identified using three indicators: 1) estimates change with minor changes in the model (such as dropping items or changing start values), 2) significance tests lead to conflicting conclusions, and 3) model coefficients have inappropriate signs (such as inconsistent patterns in the trait and method factor correlations). These conditions are commonly found when analyzing MTMM data (Bagozzi and Yi 1990). In fact, changing start values and dropping methods or traits are commonly used to beat MTMM data into submission. While there may be many other causes of estimation problems, multicollinearity is a likely candidate. Mason and Perreault (1991) note that multicollinearity is not a problem unless the sample size is small, overall R squared is low, and collinearity exceeds 0.65. While appropriate for regression analysis, these guidelines may understate the problem for confirmatory factor analysis of MTMM data, especially when both trait intercorrelations and method intercorrelations are high (Fiske and Campbell 1992). In such a case, a heterotrait-heteromethod correlation may be due to correlated traits, correlated methods, or both. It therefore becomes very difficult to estimate the exact source of the correlation. To deal with the problem, researchers need to follow Campbell and Fiske's recommendation to use dissimilar traits and methods when collecting MTMM data.
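One simple diagnostic consistent with this discussion (though not one the paper itself reports) is the condition number of the trait or method correlation matrix; the matrices below are hypothetical.

```python
import numpy as np

# Collinearity check on a correlation matrix: the condition number (largest
# eigenvalue over smallest) grows as traits or methods become redundant.
# Both matrices below are hypothetical illustrations.

def condition_number(R):
    eigenvalues = np.linalg.eigvalsh(R)
    return eigenvalues.max() / eigenvalues.min()

R_distinct = np.array([[1.0, 0.3, 0.3],
                       [0.3, 1.0, 0.3],
                       [0.3, 0.3, 1.0]])   # dissimilar traits

R_collinear = np.array([[1.0, 0.9, 0.9],
                        [0.9, 1.0, 0.9],
                        [0.9, 0.9, 1.0]])  # nearly redundant traits

print(condition_number(R_distinct))    # ~2.3
print(condition_number(R_collinear))   # ~28: estimates will be unstable
```

Correlations of 0.9 among traits are well past Mason and Perreault's 0.65 regression guideline, and the eigenvalue ratio makes the resulting instability visible before any CFA is run.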

TABLE 1: STUDIES CONSIDERED FOR ANALYSIS

TABLE 2: CONFIRMATORY FACTOR ANALYSIS RESULTS

Data Problems

Since the computational goal of confirmatory factor analysis is to replicate the target covariance matrix, any data problems that affect the nature of the covariance matrix will affect the results of the factor analysis. Two possible causes of data problems are outliers and missing values. Bollen (1987) found that outliers can cause estimation problems; by dropping severe outliers he was able to eliminate Heywood cases. Bollen (1987) also discusses approaches for identifying and dealing with outliers. In addition, the treatment of missing values can often frustrate researchers. The method used to deal with missing data (e.g., pairwise versus listwise deletion) can affect the nature of the covariance matrix. For example, missing data in the Arora data set is a possible explanation for why the correlation between two items is greater than the reliability estimate of either (an inappropriate situation). Several techniques exist for dealing with missing data so that inappropriate or problem covariance matrices do not occur (Stewart 1982, Timm 1970). Lastly, Bagozzi and Yi (1993) suggest that estimation problems may arise if the reliability of the various items is drastically different.
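A toy illustration with fabricated data shows how the missing-data rule changes the covariance matrix handed to the factor analysis.

```python
import numpy as np

# Toy illustration (fabricated data): pairwise and listwise deletion imply
# different moments, hence different input matrices for the factor analysis.

x = np.array([1.0, 2.0, 3.0, 4.0, np.nan])
y = np.array([1.5, np.nan, 2.5, 4.5, 5.0])

# Listwise deletion keeps only cases observed on every variable:
both = ~np.isnan(x) & ~np.isnan(y)
var_x_listwise = np.var(x[both], ddof=1)          # 3 cases -> ~2.33

# Pairwise deletion uses all cases available for each moment separately:
var_x_pairwise = np.var(x[~np.isnan(x)], ddof=1)  # 4 cases -> ~1.67

# With several variables, mixing moments computed on different subsamples
# can even yield a non-positive-definite "covariance" matrix.
```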

Violation of Assumptions

Confirmatory factor analysis assumes that observations are identically distributed (Bentler and Chou 1987). In other words, each observation is assumed to have the same factor structure. It is quite possible that individuals may differ on their reactions to various methods. For example, some people may have a constant tendency bias for a particular scale while others may not. The extent to which this assumption complicates the analysis of MTMM data is unknown. It is also unclear how it might be tested for and corrected if it did exist. Clearly more research is needed on the nature of method effects if we are to address this possible problem.

Misspecification of the Model

Misspecification of MTMM models can occur in numerous ways. One of the most problematic is the specification of method effects themselves. It is generally assumed that systematic measurement error is captured by the different methods. However, this is not always the case (Brannick and Spector 1990, Marsh and Hocevar 1988). Systematic method effects may be represented by such things as halo effects or social desirability, which can occur for some traits but not others. Relatedly, measurement error can occur at the item level rather than the measurement scale level, as when bias is introduced by the wording used in all the methods. In all these cases, the traditional specification of method effects would be inaccurate. The best way to deal with these problems is to use a second order factor model (Gerbing and Anderson 1984, Marsh and Hocevar 1988). Because the exact nature of the method effects is not specified, the second order factor model can account for various types of method effects. In addition, it allows diagnostics at the item rather than the scale level (Gerbing and Anderson 1984, Marsh and Hocevar 1988).

CONCLUSIONS

There are a number of problems which may limit the applicability of confirmatory factor analysis for analyzing MTMM data. While some researchers have claimed there may be fundamental flaws with the application of additive confirmatory factor analysis (block diagonal model) to MTMM data (Bagozzi and Yi 1990, Bagozzi and Yi 1991), it is more likely that the specific data to which we apply the technique is the fundamental root of estimation problems. Confirmatory factor analysis of MTMM data is powerful when applied appropriately. But we cannot ask more of the procedure than it can realistically deliver.

There are some simple guidelines researchers can follow to minimize estimation problems. First, make certain that the correlation matrix appears reasonable. Start by calculating reliabilities and make certain the correlation between two variables does not exceed the reliability of either variable. This situation would indicate corrections should be made to the data before MTMM analysis begins. In addition, Campbell and Fiske's criteria for analyzing MTMM matrices can be used to identify potential problems (e.g., size of validity diagonal values relative to other values, lack of discrimination, etc.). The data sets should be examined for outliers using the techniques recommended by Bollen (1987); even a single outlier can cause severe estimation problems. When missing values exist, alternative methods for estimating covariances should be considered (Stewart 1982). Lastly, there may be a significant amount of multicollinearity present in the data, which may result in unstable estimates when using confirmatory factor analysis. Campbell and Fiske's requirement of maximally dissimilar methods and discriminant traits appears to hold for confirmatory factor analysis as well (albeit to a lesser degree).
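The first guideline above can be automated with a trivial check; the function name and example numbers are hypothetical.

```python
# Minimal version of the first guideline above: a correlation between two
# measures should not exceed either measure's reliability. Function name
# and the example numbers are hypothetical.

def correlation_exceeds_reliability(r_xy, rel_x, rel_y):
    # True flags the inappropriate situation described in the text.
    return abs(r_xy) > min(rel_x, rel_y)

print(correlation_exceeds_reliability(0.85, 0.70, 0.80))  # True: inspect the data
print(correlation_exceeds_reliability(0.55, 0.70, 0.80))  # False
```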

TABLE 3: SUMMARY OF MODEL APPROPRIATENESS

Researchers may also want to consider using second order factor analysis (Marsh and Hocevar 1988). This technique has not received wide application, and as such is untested. However, it does provide additional diagnostic information about each item in the analysis and has less restrictive assumptions about the nature of method effects. If a CFA model results in estimation problems, the second order factor model might provide the diagnostics to identify the cause of the problem.

Lastly, we clearly need to more carefully consider the nature of method effects. It is quite possible that our current view of the nature of method effects is flawed (Fiske and Campbell 1992). Either method effects do not exist as we currently view them, or the nature of the relationship between methods and traits is different than the additive or multiplicative models.

APPENDIX

SOURCES OF DATA

Allen, Jon G., and J. Herbert Hamsher (1974), "The Development and Validation of a Test of Emotional Styles," Journal of Consulting and Clinical Psychology, 42 (5), 663-8.

Arora, Raj (1982), "Validation of an S-O-R Model for Situation, Enduring, and Response Components of Involvement," Journal of Marketing Research, 19 (November), 505-16.

Dunham, R., F. Smith, and R. Blackburn (1977), "Validation of the Index of Organizational Reactions with the JDI, MSQ, and the Faces Scales," Academy of Management Journal, 20, 420-432.

Elbert, N. (1979), "Questionnaire Validation by Confirmatory Factor Analysis: An Improvement Over Multitrait-Multimethod Matrices," Decision Sciences, 10, 629-44.

Flamer, Stephen (1983), "Assessment of the MTMM Matrix Validity of Likert Scales Via Confirmatory Factor Analysis," Multivariate Behavioral Research, 18 (July), 275-308.

Freedman, Richard D., and Stephen A. Stumpf (1978), "Student Evaluations of Courses and Faculty Based on a Perceived Learning Criterion: Scale Construction, Validation, and Comparison of Results," Applied Psychological Measurement, 2 (Spring), 189-202.

Hicks, Jack M. (1967), "Comparative Validation of Attitude Measures by the Multitrait-Multimethod Matrix," Educational and Psychological Measurement, 27, 985-95.

Kothandapani (1971), "Validation of Feeling, Belief, and Intention to Act as Three Components of Attitude and Their Contribution to Prediction of Contraceptive Behavior," Journal of Personality and Social Psychology, 19 (September), 321-33.

Marsh, Herbert W. and Butler (1984), "Evaluating Reading Diagnostic Tests: An Application of Confirmatory Factor Analysis to MTMM Data," Applied Psychological Measurement, 8 (3), 307-20.

Meier, S. (1984), "The Construct Validity of Burnout," Journal of Occupational Psychology, 57, 211-219.

Ostrom (1969), "The Relationship Between the Affective, Behavioral, and Cognitive Components of Attitude," Journal of Applied Social Psychology, 5, 12-30.

Roberts, Mary Ann, et al. (1981), "A Multitrait-Multimethod Analysis of Variance of Teachers' Ratings of Aggression, Hyperactivity, and Inattention," Journal of Abnormal Child Psychology, 9 (3), 371-80.

Seymour, Daniel and Greg Lessne (1984), "Spousal Conflict Arousal: Scale Development," Journal of Consumer Research, 11 (3), 810-21.

Shavelson, Richard J., and Roger Bolus (1982), "Self Concept: The Interplay of Theory and Methods," Journal of Educational Psychology, 74 (1), 3-17.

REFERENCES

Bagozzi, Richard P. and Youjae Yi (1990), "Assessing Method Variance in Multitrait-Multimethod Matrices: The Case of Self-Reported Affect and Perceptions at Work," Journal of Applied Psychology, 75, 547-61.

Bagozzi, Richard P. and Youjae Yi (1991), "Multitrait-Multimethod Matrices in Consumer Research," Journal of Consumer Research, 17 (March), 426-39.

Bagozzi, Richard P. and Youjae Yi (1993), "Multitrait-Multimethod Matrices in Consumer Research: Critique and New Developments," Journal of Consumer Psychology, 2 (2), 143-70.

Bentler, Peter M. (1992), EQS, Los Angeles: BMDP Statistical Software, Inc.

Bentler, Peter M. and Chih-Ping Chou (1987), "Practical Issues in Structural Modeling," Sociological Methods and Research, 16 (August), 78-117.

Bollen, Kenneth A. (1987), "Outliers and Improper Solutions," Sociological Methods and Research, 15 (May), 375-84.

Boomsma, Anne (1985), "Nonconvergence, Improper Solutions, and Starting Values in LISREL Maximum Likelihood Estimation," Psychometrika, 50 (2), 229-42.

Brannick, Michael T. and Paul E. Spector (1990), "Estimation Problems in the Block-Diagonal Model of the Multitrait-Multimethod Matrix," Applied Psychological Measurement, 14 (December), 325-339.

Browne, Michael W. (1984), "The Decomposition of Multitrait-Multimethod Matrices," British Journal of Mathematical and Statistical Psychology, 37, 1-21.

Campbell, Donald T. and Edward J. O'Connell (1967), "Methods Factors in Multitrait-Multimethod Matrices: Multiplicative Rather than Additive?," Multivariate Behavioral Research, 2 (October), 409-26.

Campbell, Donald T. and Edward J. O'Connell (1982), "Methods as Diluting Trait Relationships Rather than Adding Irrelevant Systematic Variance," in D. Brinberg and L. H. Kidder (eds.), Forms of Validity in Research, San Francisco: Jossey-Bass Inc., 93-110.

Churchill, Gilbert A. (1979), "A Paradigm for Developing Better Measures of Marketing Constructs," Journal of Marketing Research, 16 (February), 64-73.

Cote, Joseph A. and M. Ronald Buckley (1987), "Estimating Trait, Method, and Error Variance: Generalizing Across Seventy Construct Validation Studies," Journal of Marketing Research, 26 (August), 315-9.

Cote, Joseph A. and Robert Greenberg (1990), "Systematic Measurement Error and Structural Equation Models," Advances in Consumer Research, 17, 426-33.

Dillon, William R., Ajith Kumar and Narandra Mulani (1987), "Offending Estimates in Covariance Structure Analysis: Comments on the Causes of and Solutions to Heywood Cases," Psychological Bulletin, 101 (January), 126-35.

Fiske, Donald W. and Donald T. Campbell (1992), "Citations Do Not Solve Problems," Psychological Bulletin, 112 (3), 393-5.

Gerbing, David W. and James C. Anderson (1984), "On the Meaning of Within-Factor Correlated Measurement Errors," Journal of Consumer Research, 11 (June), 572-80.

Kenny, David A. and Deborah A. Kashy (1992), "Analysis of the Multitrait-Multimethod Matrix by Confirmatory Factor Analysis," Psychological Bulletin, 112 (1), 165-72.

Lastovicka, John L., John P. Murry, Jr., and Eric Joachimsthaler (1990), "Evaluating the Measurement Validity of Lifestyle Typologies With Qualitative Measures and Multiplicative Factoring," Journal of Marketing Research, 27 (February), 11-23.

Lehmann, Donald R. (1988), "An Alternative Procedure for Assessing Convergent and Discriminant Validity," Applied Psychological Measurement, 12 (December), 411-23.

Lehmann, Donald R. and Sunil Gupta (1989), "PACM: A Two Stage Procedure for Analyzing Structural Models," Applied Psychological Measurement, 13 (3), 301-21.

Marsh, Herbert and Michael Bailey (1991), "Confirmatory Factor Analysis of Multitrait-Multimethod Data: A Comparison of Alternative Models," Applied Psychological Measurement, 15 (March), 47-70.

Marsh, Herbert W. (1989), "Confirmatory Factor Analysis of Multitrait-Multimethod Data: Many Problems and a Few Solutions," Applied Psychological Measurement, 13 (December), 335-61.

Marsh, Herbert W. and Dennis Hocevar (1983), "Confirmatory Factor Analysis of Multitrait-Multimethod Matrices," Journal of Educational Measurement, 20 (Fall), 231-48.

Marsh, Herbert W. and Dennis Hocevar (1988), "A New More Powerful Approach to Multitrait-Multimethod Analyses: Application of Second-Order Confirmatory Factor Analysis," Journal of Applied Psychology, 73 (1), 107-17.

Mason, Charlotte H. and William D. Perreault (1991), "Collinearity, Power, and Interpretation of Multiple Regression Analysis," Journal of Marketing Research, 28 (August), 268-80.

Rindskopf, David (1983), "Parameterizing Inequality Constraints on Unique Variances in Linear Structural Models," Psychometrika, 48, 73-83.

Rindskopf, David (1984), "Structural Equation Models: Empirical Identification, Heywood Cases, and Related Problems," Sociological Methods and Research, 13 (August), 109-19.

Stewart, David W. (1982), "Filling the Gaps: A Review of the Missing Data Problem," in Bruce J. Walker, William O. Bearden, William R. Darden, Patrick W. Murphy, John R. Nevin, Jerry C. Olson, and Barton A. Weitz (eds.), An Assessment of Marketing Thought and Practice, Chicago: American Marketing Association, 395-9.

Timm, N. H. (1970), "The Estimation of Variance-Covariance and Correlation Matrices from Incomplete Data," Psychometrika, 35 (December), 417-37.

van Driel, Otto (1978), "On Various Causes of Improper Solutions in Maximum Likelihood Factor Analysis," Psychometrika, 43 (June), 225-43.

Widaman, Keith F. (1985), "Hierarchically Nested Covariance Structure Models for Multitrait-Multimethod Data," Applied Psychological Measurement, 9 (March), 1-26.

Wothke, Werner and Michael W. Browne (1990), "The Direct Product Model for the MTMM Matrix Parameterised as a Second Order Factor Analysis Model," Psychometrika, 55 (June), 255-62.

