# The Effect of Pair Similarity on Dollarmetric Profile Comparisons

[To cite]:

Joel Huber and David Sheluga (1980), "The Effect of Pair Similarity on Dollarmetric Profile Comparisons," in NA - Advances in Consumer Research Volume 07, eds. Jerry C. Olson, Ann Arbor, MI: Association for Consumer Research, Pages: 134-139.

[Direct URL]:

http://acrwebsite.org/volumes/9663/volumes/v07/NA-07

A set of dollarmetric profile comparisons was divided into two groups according to the similarity of the individual pairs. It was found that the dissimilar pairs, differing on at least three components, resulted in lower individual utility values that were less likely to be statistically significant. Task simplification is proposed to account for this result.

INTRODUCTION

A profile comparison involves the relative evaluation of stimuli that differ across a number of components or factors. In a dollarmetric profile comparison, respondents provide the dollar difference in value between a pair of profile descriptions, thereby giving an estimate of how much money would make them indifferent between the pair. The idea of creating a scale based on dollar differences between products was first developed by Pessemier and Teach (1970) out of earlier analytic developments in paired comparisons by Bechtel (1967) and Scheffé (1952).

From a series of dollarmetric conjoint comparisons, estimates of the partworth values of the different component levels can be derived. Thus, the derivation of partworth values is analogous to more typical conjoint studies (Green and Srinivasan 1978). Dollarmetric comparisons differ from conjoint measurement in two respects, however. First, the comparisons are made on pairs of profiles rather than on whole sets. Second, the respondents are asked to specify the degree of preference in dollars rather than merely stating which profile is preferred.

If pair dissimilarity is defined in the restrictive sense as the number of dimensions on which a pair differ (rather than the distance on those dimensions), then pair dissimilarity will be related to the difficulty a respondent might have with dollarmetric profile comparisons. That is, to __conscientiously__ determine the value difference of a pair differing on many dimensions, a respondent must evaluate the dollar difference on each component and then aggregate across components. Of course, a respondent may not want or even be able to perform such calculations. Indeed, the difficulty of the task may produce, as one commentator observed, "the possibility of information overload and the resulting temptation on the part of the respondent to simplify the experimental task by ignoring variations in the less important factors or by simplifying the factor levels themselves" (Green and Srinivasan 1978, p. 108). While this comment refers specifically to the difficulty respondents might have with a full profile ranking task, the same logic applies to dollarmetric comparisons. That is, pairs differing on many components should provide a more difficult task, causing respondents to engage in some degree of task simplification.

FIGURE 1: DOLLARMETRIC CONJOINT COMPARISON TASK ILLUSTRATING A PAIR DIFFERING ON ALL 6 COMPONENTS

This study examines the effect of pair similarity on the output of the analysis of dollarmetric profile comparisons. The data come from 43 student subjects evaluating cameras differing on six binary dimensions. For any given pair the number of differing components varied from two to six. Figure 1 illustrates the task and the various components for a very dissimilar pair. Respondents were asked which camera was preferred, then asked to estimate how much more the preferred camera would have to cost before they would switch preferences. This task continued for 32 pairs generated by an incomplete cyclic design (John et al. 1972) on a fractional array of 16 of the 64 possible cameras. Details of the design are provided in Sheluga (1978).

TABLE 1: DESIGN MATRICES (X_{d}) FOR INPUT TO CAMERA STUDY

To study pair similarity the 32 observations were broken into two groups of 16 pairs, shown in Table 1. The high similarity pairs differed by an average of 2.5 components while the low similarity pairs differed by an average of 4.1 components. Since both groups included pairs with the median level of similarity, three differing components, these pairs were randomly allocated to the two conditions so that there would be 16 observations in each group. It should be emphasized that although these pairs are treated as different conditions for the purposes of analysis, they were fully intermixed in the original task.

Since the assumptions of procedures for estimating component utility scores from such data are not well known, they are discussed in the next section. The paper then examines the following questions:

1. What is the effect of pair similarity on the efficiency of the design?

2. What is the effect of pair similarity on subjects' responses: their internal consistency, their estimated partworth values, and the error about these estimates?

3. What is the effect of pair similarity on the statistics associated with these individual OLS analyses? Are there differences in overall indices of fit (R^{2}, F-test) or in the significance of the individual utility values?

Finally, the paper considers various models of respondent behavior that could account for the results.

ANALYSIS

Separate estimates of the partworth values of the camera components were derived for both the high and the low similarity pairs of each subject. The estimations involved least-squares regressions with dollar differences predicted by stimulus component differences. This estimation method relates to the direct dummy regression of component levels on stimulus valuation in a simple way. Assume there is an additive relation between the dollar valuation of an item and its components as:

V = X'B + e. (1)

Here, __V__ is a vector of dollar values for each composite stimulus, __X__ is a matrix each of whose rows indicates the component levels of the stimulus, __B__ is a vector of the dollar values of the components, and __e__ is a vector of residuals. If (1) holds then dollar differences in valuation for any two stimuli denoted by __i__ and __j__ is

V_{i} - V_{j} = (X_{i} - X_{j})'B + e_{i} - e_{j}, (2)

or more simply,

V_{d} = X_{d}'B + e_{d}, (3)

where the variables are expressed in difference form rather than as individual stimuli. Notice that the estimates of __B__ will be identical in (1) and (3) provided valuation is a linear function of the components. In the camera data, with six binary components, the relationship between the two models is particularly straightforward. The factor difference matrices X_{d} shown in Table 1 have a row for each stimulus pair. The elements of each row represent component differences, having value "+1" if the component is present in the first stimulus but not in the second, "-1" if absent in the first but present in the second, and "0" if the pair have equal levels with respect to the component.
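As an illustrative sketch (the profiles below are hypothetical, not the study's actual camera stimuli), a row of the difference matrix X_{d} can be computed directly by subtracting the two profiles' binary component codes:

```python
import numpy as np

# Two hypothetical binary profiles (1 = high level of a component,
# 0 = low level) across six components
profile_i = np.array([1, 0, 1, 1, 0, 0])
profile_j = np.array([0, 0, 1, 0, 1, 0])

# Difference coding: +1 where the component is high in i only,
# -1 where high in j only, 0 where the pair share the same level
x_d = profile_i - profile_j

print(x_d)                    # [ 1  0  0  1 -1  0]
print(np.count_nonzero(x_d))  # 3: this pair differs on three components
```

The count of nonzero entries is exactly the pair dissimilarity __d__ used throughout the paper.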

Using the standard OLS assumptions the component values can be estimated as

B = (X_{d} 'X_{d})^{-1}X_{d} 'V_{d}. (4)

Since the dollar difference provides an absolute scale of preference differences (i.e., no transformation is permitted on the dollar differences without changing their meaning), nonmetric routines need not be used as in other forms of conjoint analysis. Furthermore, the coefficients have a direct interpretation as the average dollar value of the high level of a component over its low level. As an example, for the average subject in the present study the electronic flash was worth approximately $7 more than the conventional flash.
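A minimal sketch of the estimation in Equation (4), with simulated responses and hypothetical partworths (the first value mimics the ~$7 flash figure mentioned in the text; all other numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true dollar partworths for six binary components
b_true = np.array([7.0, 3.0, 5.0, 2.0, 4.0, 1.0])

# Simulated difference design: 32 pairs, entries in {-1, 0, +1}
X_d = rng.integers(-1, 2, size=(32, 6)).astype(float)

# Dollar-difference responses under the additive model, Equation (3)
V_d = X_d @ b_true + rng.normal(scale=2.0, size=32)

# OLS estimate of the partworths, Equation (4): B = (X_d'X_d)^{-1} X_d'V_d
B_hat, *_ = np.linalg.lstsq(X_d, V_d, rcond=None)
print(np.round(B_hat, 2))  # should recover b_true up to sampling error
```

Using `lstsq` rather than an explicit matrix inverse is numerically safer but solves the same normal equations as Equation (4).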

The variance-covariance matrix of the estimated component values follows directly from the least squares assumptions. It is

Var (B) = s^{2}_{e} (X_{d} 'X_{d})^{-1}. (5)

To the extent that the analyst is concerned with minimizing the error on the component values, Equation (5) is useful in decomposing that error into the design efficiency of the component differences, (X_{d}'X_{d})^{-1}, and the respondent's error about the additive model, s^{2}_{e}. It will be shown for the camera data that increasing pair similarity affects these two elements of error in opposite directions: it results in smaller errors about the overall additive model but lessens design efficiency.
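A sketch of the decomposition in Equation (5), again on a simulated design with hypothetical partworths, showing how the standard errors of the component values arise from the product of the two terms:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 32, 6
b_true = np.array([7.0, 3.0, 5.0, 2.0, 4.0, 1.0])  # hypothetical values

X_d = rng.integers(-1, 2, size=(n, k)).astype(float)
V_d = X_d @ b_true + rng.normal(scale=2.0, size=n)

B_hat, *_ = np.linalg.lstsq(X_d, V_d, rcond=None)

# Respondent error about the additive model, s_e^2, on n - k df
resid = V_d - X_d @ B_hat
s2_e = resid @ resid / (n - k)

# Design efficiency term, and Equation (5): Var(B) = s_e^2 (X_d'X_d)^{-1}
eff = np.linalg.inv(X_d.T @ X_d)
se_B = np.sqrt(s2_e * np.diag(eff))  # standard errors of the partworths
print(np.round(se_B, 2))
```

Either factor alone is insufficient to compare designs: a more efficient design can still yield larger standard errors if it provokes larger respondent error, which is the paper's central point.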

We first examine the issue of design efficiency stemming from pair similarity and then consider behavioral effects that distort the estimated component values and their accompanying statistics.

EFFECT OF PAIR SIMILARITY ON DESIGN EFFICIENCY

Design efficiency refers to the degree to which a particular selection of pairs permits economical estimates of the desired parameters. Relative efficiency is typically defined as the ratio of the expected parameter variances for each design. In the present case this would be the ratio of the variances about the B's or

Relative Efficiency = s^{2}_{e1}tr(X_{1}'X_{1})^{-1}/s^{2}_{e2}tr(X_{2}'X_{2})^{-1}. (6)

Here, the subscripts refer to two designs, 1 and 2. On the common assumption (to be relaxed later) that the residual error is unrelated to the selection of particular pairs in the design (i.e., s^{2}_{e1} = s^{2}_{e2}), this expression simplifies to:

Relative Efficiency' = tr(X_{1}'X_{1})^{-1}/tr(X_{2}'X_{2})^{-1}. (7)

Using the above formulation, the relative efficiency of a design depends, somewhat surprisingly, on the average similarity of its pairs. Consider the two designs shown in Table 2. Both designs define eight pairs on four binary dimensions. In the first panel the pairs differ on only one of the four attributes, while in the second panel they differ on all attributes. From the inverses of the cross-products of the design matrices, the efficiency of the dissimilar pairs is found to be four times that of the similar pairs. Thus, the similar pair design requires four times as many observations to achieve the same expected variance.

Generally, for orthogonal designs such as shown in Table 2, with n pairs, each differing on d out of k possible attributes, the variance about the coefficient is s^{2}_{e}(k/nd). Thus the efficiency of a given design is inversely proportional to pair dissimilarity. Intuitively, this is due to the fact that in designs with highly similar pairs, a given component is different (nonzero) only for a subset of pairs. It is only those nonzero pairs that provide any information on the value of that component. By contrast, for highly dissimilar pairs, each observation provides information on all components.
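The fourfold efficiency difference can be verified numerically. The matrices below are illustrative reconstructions of the Table 2 logic (not the paper's actual tables): eight pairs on four binary attributes, differing on either one attribute or all four.

```python
import numpy as np

# Similar-pair design: each of 8 pairs differs on exactly one attribute
X_sim = np.vstack([np.eye(4), -np.eye(4)])

# Dissimilar-pair design: each pair differs on all four attributes
# (rows of a 4x4 Hadamard-type matrix and their negatives)
H = np.array([[1.,  1.,  1.,  1.],
              [1., -1.,  1., -1.],
              [1.,  1., -1., -1.],
              [1., -1., -1.,  1.]])
X_dis = np.vstack([H, -H])

# Expected-variance factor for each design: tr(X'X)^{-1}, as in Eq. (7)
t_sim = np.trace(np.linalg.inv(X_sim.T @ X_sim))  # k^2/(n*d) = 16/8  = 2.0
t_dis = np.trace(np.linalg.inv(X_dis.T @ X_dis))  # k^2/(n*d) = 16/32 = 0.5
print(t_sim / t_dis)  # 4.0: similar pairs need 4x the observations
```

Both traces match the closed form for orthogonal designs, since each coefficient's variance factor is k/(nd) and there are k coefficients.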

TABLE 2: THE EFFECT OF PAIR SIMILARITY ON DESIGN EFFICIENCY

For non-orthogonal designs (where X_{d}'X_{d} is not diagonal) the same general effect of pair similarity on design efficiency is observed. Among the designs for the camera data in Table 1, the similar pairs have an efficiency that is 67% of the dissimilar pairs. Thus, given equal respondent errors, one would expect the variances about the partworths to be about 33% higher for the similar pairs. However, as detailed in the next section, respondent errors are greater for the dissimilar pairs, making it inappropriate to compare designs by Equation 7 alone; Equation 6 must be used instead.

THE EFFECT OF PAIR SIMILARITY ON RESPONDENTS' JUDGMENTS

The effect of pair similarity on respondents' judgments was estimated by running separate regressions on the high and low similarity pairs of each subject. We first consider the coefficients of the regressions: their magnitude, significance, and standard errors. Then overall statistics, such as R^{2} and the F-test, are discussed.

Many aspects of the split analysis were similar across conditions. For example, upon casual examination, the component values (B's) in the high and low pair similarity conditions appeared quite similar. The product-moment correlation between the two sets of coefficients across all 43 respondents and 6 components was r = 0.74, not high compared with other conjoint studies (see Green and Srinivasan, 1978), but perhaps reasonable given the inefficiency of the split designs. There were, however, three areas where splitting the analysis on pair similarity produced significant and surprising results.

The first noticeable result was a small but significant difference in the mean values of the partworths. This difference was magnified if one compared the magnitudes (absolute values) of the coefficients. As is shown in Table 3, in the similar pair condition the average magnitude of the coefficients was significantly higher for three of the six components and overall was about $1.70 higher. Thus it appears that having fewer components on which to make pairwise comparisons resulted in larger partworth values for these components.

Table 3 also shows that in the similar pair condition 22% more of the regression coefficients were significantly different from zero at a 0.05 alpha level. This result is particularly surprising given the fact that the similar pairs had a design efficiency approximately 30% smaller than the dissimilar pairs. One would expect less power, and therefore fewer significant coefficients for the similar pairs; exactly the opposite of what occurred.

A third major difference between the analyses may explain this last result in part. The standard error (s_{e}) about the additive model was $7.10 for the similar pairs and $8.50 for the dissimilar pairs (see Table 4). This 20 percent difference was significant at the 0.05 level across the 43 subjects. While the difference in the standard error is not itself sufficient to account for the higher number of significant coefficients, it has an interesting implication about the behavioral effect of pair similarity. If one assumes that the same additive model was used by respondents in both the high and low similarity conditions (a reasonable assumption given that the two conditions were merged in one task), then it follows that there was greater random error about the judgments on dissimilar pairs. This result could be due to a task simplification strategy (see Wright, 1975), where certain attributes are simply left out of the calculation, or to errors made in combining the additive differences. For the present it is sufficient to note that the dissimilar pairs produced greater overall error. A model that might account for this lack of internal consistency is considered shortly.

EFFECT OF PAIR SIMILARITY ON SUMMARY REGRESSION STATISTICS

The product of the error of estimate (s^{2}_{e}) and the design efficiency (X_{d}'X_{d})^{-1} is the variance of the component values (s^{2}_{B}). The square root of this statistic is the standard error about the component values. It measures the net result of the competing effects of respondent inconsistency and design efficiency as one increases the dissimilarity between pairs. If we assume for a moment that the object in a conjoint analysis is solely to minimize the error about the partworth values, then this is the only statistic needed. The values for s_{B} are given in Table 3. In the present case, the gain in respondent consistency in the similar pairs condition was not sufficient to offset its loss of design efficiency relative to the dissimilar pairs.

Although the error about the regression was smaller for similar pairs, both measures of variance accounted for, R^{2} and the F-test, as shown in Table 4, were virtually identical across the two conditions. This is because both statistics are a function of the ratio of the error of estimate __to__ the standard deviation of the original dollar differences. As the standard deviation of the original variables is approximately proportional to the standard error around the regression, the two effects cancel. This occurs because items having greater physical differences will generally have greater value differences. It illustrates another case, however, where comparing models using only R^{2} may lead to an oversimplified conclusion.

DISCUSSION: TWO MODELS OF TASK BEHAVIOR

It has been asserted that managers use implicit models of human behavior in making decisions. One of the roles of research is to make these models explicit and test their validity. In the same way marketing researchers gathering information from respondents have implicit models about the effect of task conditions on the validity of various instruments. In the psychological sciences, Rosenthal and Rosnow (1969) have catalogued and measured the effect of a variety of experimenter and task effects on the outcome of research. It is particularly important for conjoint analysis, being a relatively new but extensively used technique, that the model of respondent reaction to the task be made explicit and tested.

Consider the results of the camera study. Highly similar pairs resulted in smaller error around the overall additive model; they resulted in coefficients of greater size which were more likely to be statistically significant.

If respondents in the dissimilar pairs condition were undergoing information overload (Jacoby, Speller and Kohn, 1974) and were therefore simplifying the task (Wright, 1975), then both the effects of greater error and lower rate of statistical significance can be explained. That is, suppose respondents only processed a limited number of attributes, say two or three, during any comparison so that the other attributes that differ were effectively ignored. Attributes thus left out of the calculation would have an implicit value of zero for that comparison as that attribute would have no effect on the preference difference. Since different attributes would be left out at different times by different subjects, the average magnitude of the resultant coefficients would be lessened accordingly. For similar pairs with less processing required, attributes would be less likely to be left out of the evaluation. Thus, similar pairs should result in both smaller overall error and greater magnitude of the part-worth coefficients.
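The attenuation mechanism can be illustrated with a small simulation (all values hypothetical, not estimated from the camera data): respondents facing fully dissimilar pairs process only two randomly chosen attributes per comparison, so the fitted coefficients shrink toward the inclusion probability times the true partworths.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 6
b_true = np.array([7.0, 5.0, 4.0, 3.0, 2.0, 1.0])  # hypothetical partworths

# Fully dissimilar pairs: every component differs on every pair
X = rng.choice([-1.0, 1.0], size=(n, k))

# Task simplification: only 2 randomly selected attributes enter each
# judgment; ignored attributes contribute an implicit zero value
V = np.zeros(n)
for t in range(n):
    used = rng.choice(k, size=2, replace=False)
    V[t] = X[t, used] @ b_true[used]

B_hat, *_ = np.linalg.lstsq(X, V, rcond=None)
print(np.round(B_hat, 2))  # roughly (2/6) * b_true: attenuated coefficients
```

Under random (rather than strict) selection, every coefficient shrinks in expectation by the inclusion probability, and the ignored attributes appear as extra residual error, which matches both effects reported for the dissimilar pairs.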

Given that task simplification enables respondents to cope with processing highly dissimilar pairs, the question remains as to which attributes are left out. One model could be called __strict hierarchical attribute selection__, in which attributes are ranked in terms of importance (value). When a pair is presented, the respondent selects the two or three most important attributes and processes those. The remaining attributes are left out of the calculation and given implicit zero values. By contrast, an alternative pattern of response could be called __random hierarchical attribute selection__. In this model, the attributes evaluated during task simplification follow a random hierarchy that differs probabilistically across pairs. The probability of inclusion could be uniform, so that all attributes have equal chances of being processed, or, more likely, biased, where the probability of being included in the calculation depends directly on the utility of the attribute. This latter model is similar in some ways to Tversky's (1972) elimination-by-aspects model in that an attribute is selected with a probability that depends on its utility.

The camera preference data can be used to determine whether respondents are using strict attribute selection. If strict selection is the correct model, then the most important attributes would be included in the calculation every time they occur. Since any attribute differs more often in the dissimilar pairs condition, the strict hierarchy model predicts higher significance for the most important attributes in that condition. This is because in the similar pairs condition, while one would expect the more important attributes to be included when they appear, they appear less often. Thus the strict hierarchical model predicts the two or three most valued components to have higher levels of significance in the dissimilar pairs condition. Further, components high in the hierarchy are generally included in the affective calculation regardless of pair similarity, so they should have coefficients whose magnitudes are comparable to those in the similar pairs condition. The data in Table 3 indicate that the most valued attribute across subjects, the electronic flash, does have a slightly higher proportion of significant coefficients for the dissimilar pairs, 74% against 70%, but this difference is not significant. Moreover, the absolute value of the flash is significantly __lower__ in the dissimilar pairs condition, suggesting that attenuation takes place as that attribute is left out of the calculation. Therefore, on the basis of the aggregate data the strict attribute selection model must be rejected.

Table 5 confirms this analysis at the individual level. The significance level and average magnitude of the three most important attributes are tabulated for the two conditions. Once again, the percent significant are not statistically different between conditions, while the average magnitude of the three largest coefficients is, thus providing additional evidence that the strict hierarchy model is incorrect.

The camera data support the following "model" of consumer response behavior to a conjoint comparison task. First, the respondent searches for attributes that differ. There is not a fixed hierarchy of attribute consideration but a more random one. That is, the probability a given component is considered in a comparison is a function of such factors as (1) the value of the component, (2) whether that component had been considered in the last comparison and (3) the order in which the respondent happens to scan the profile sheet. Since, however, the respondent has limited processing capacity and, more likely, limited patience, the search and evaluation of differing components ceases after two or three have been processed. This simplification strategy implies that more attributes are left out in the evaluation of dissimilar pairs, thus resulting in higher levels of error and smaller coefficients that are less likely to be significant.

CONCLUSIONS AND IMPLICATIONS

At the outset it must be acknowledged that the above "model" is speculative and should be treated as such until validated. The above model is only one of many that could account for the data found in this study. The data are consistent with the model but do not confirm its validity.

Confirmation of the model would involve different kinds of measurements than were taken here. First it would be helpful to use orthogonal designs as in Table 2 so that pair similarity could be more efficiently controlled. Second, to test whether simplification actually is occurring, a measure of response time for pairs with varying levels of similarity could help determine the degree of processing. For example, it is reasonable to assume that the time needed to adequately process attributes is a linear function of the number of attributes that differ. Suppose, however, that one found marginal processing time decreasing relative to the number of dimensions differing in the pair. That would provide evidence that processing was being truncated.

Finally, one could test whether the same attributes are always processed in the same order by combining eye-tracking methodology with the comparison task (e.g., Russo, 1978). If the same attributes are included in the same order, then the scanning pattern should be the same; systematic shifts in scanning would indicate that the hierarchy had changed.

If the model of respondent behavior described here is accurate, then there are several implications for users of dollarmetric comparisons and related conjoint analysis. One implication is technical and involves the use of OLS to estimate component values. Under the OLS assumptions, errors should be uncorrelated and homoskedastic. In the dollarmetric conjoint comparisons for the camera data, however, the error depends on __d__, the number of attributes on which a pair differ. On the assumption that the error variance is approximately proportional to __d__, one could remove the heteroskedasticity by weighting each observation by 1/__d__.
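A sketch of a weighted-least-squares correction along these lines (simulated data, hypothetical partworths; the error variance is assumed proportional to __d__, as suggested above, not estimated from the camera data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 64, 6
b_true = np.array([7.0, 3.0, 5.0, 2.0, 4.0, 1.0])  # hypothetical values

# Pairs differing on d = 2..6 components, as in the camera design
X = np.zeros((n, k))
for t in range(n):
    d_t = rng.integers(2, 7)
    cols = rng.choice(k, size=d_t, replace=False)
    X[t, cols] = rng.choice([-1.0, 1.0], size=d_t)

d = np.count_nonzero(X, axis=1)

# Heteroskedastic errors: variance proportional to d
V = X @ b_true + rng.normal(scale=np.sqrt(d.astype(float)))

# WLS with weights 1/d: divide each row and response by sqrt(d),
# which restores homoskedastic errors, then apply ordinary OLS
s = 1.0 / np.sqrt(d)
B_wls, *_ = np.linalg.lstsq(X * s[:, None], V * s, rcond=None)
print(np.round(B_wls, 2))  # close to b_true despite the heteroskedasticity
```

Weighting each observation by 1/__d__ in the normal equations is equivalent to this square-root transformation of the rows.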

A second implication relates to the issue of whether the two-attribute-at-a-time approach (Johnson, 1974) is superior to the profile method of collecting data for conjoint analysis (see Green and Srinivasan, 1978; Westwood et al., 1974). When similar pairs of stimuli differ on only two attributes, the task is similar to the two-factor-at-a-time method; the difference is that the latter method requires only a ranking, while the dollarmetric approach requires an estimate of a dollar difference. The full profile method, when three or more factors are used, approximates the dollarmetric task in the dissimilar pairs condition. If these analogies are accepted, then it is likely that the same kinds of simplification strategies that occurred in the camera pairs would occur in multifactor profile methods. That is, error would be higher, coefficients would be lower, and fewer would be statistically significant.

Such a possibility could be easily tested by a series of conjoint analyses where the number of factors is systematically increased from a very low level to a much higher one.

Finally, with respect to conjoint comparisons, the data here indicate that simplification occurs as the dimensional differences within pairs increase. Whether such simplification results in biased estimates depends on the degree to which actual choice entails similar overload. For example, many attributes may be traded off in the relatively involved selection process of purchasing an automobile. Modeling such a process requires a task that does not result in substantial truncation of attributes. In such a context, then, conjoint comparisons with small differences (d < 3) would be preferable to those with greater dissimilarity.

By contrast, for frequently purchased nondurables, such as detergents, it is likely that only a small portion of the available information is used. Conjoint comparisons on more dissimilar pairs reflect this truncation of information in an overload situation and thus provide the appropriate task. Under the above reasoning, then, high involvement consumer decisions should typically be modeled using similar pairs, while low involvement products may be best reflected in the task involving highly dissimilar pairs.

In conclusion, this study has shown that respondents, when asked to provide dollar preference differences on pairs of items, will produce different results depending on the number of dimensions that differ. In particular, it appears that a simplification strategy limits the number of dimensions that are considered and modifies the coefficients that are produced.

REFERENCES

Bechtel, Gordon G. (1967), "The Analysis of Variance and Pairwise Scaling," __Psychometrika__, 32, 1 (March), 47-65.

Gardner, Meryl P., A. A. Mitchell and J. E. Russo (1978), "Chronometric Analysis: An Introduction and an Application to Low Involvement Perception of Advertisements," __Advances in Consumer Research__, 5, 581-589, H. Keith Hunt (ed.), Association for Consumer Research.

Green, Paul E. and V. Srinivasan (1978), "Conjoint Analysis in Consumer Behavior: Issues and Outlook," __Journal of Consumer Research__, 5, (September), 103-123.

Jacoby, Jacob, D. E. Speller and C.A. Kohn (1974), "Brand Choice Behavior as a Function of Information Load," __Journal of Marketing Research__, 11, (February 1974), 63-69.

John, J. A., F. W. Wolock and H.A. David (1972), __Cyclical Designs__, National Bureau of Standards, Applied Mathematics Series, #62.

Johnson, R.M. (1971), "Market Segmentation: A Strategic Tool," __Journal of Marketing Research__, 8, (Feb.), 13-18.

Pessemier, E. A. and R. D. Teach (1970), "Disaggregation of Analysis of Variance for Paired Comparisons: An Application to a Marketing Experiment," Krannert Paper #282, Krannert School of Management, Purdue University, West Lafayette, Indiana.

Rosenthal, Robert and Ralph Rosnow (1969), __Artifact in Behavioral Research__, Academic Press, N.Y.

Russo, J. Edward (1978), "Eye Fixations Can Save the World: A Critical Evaluation and a Comparison Between Eye Fixations and Other Information Processing Methodologies," __Advances in Consumer Research__, 5, 561-570. H. Keith Hunt (ed.) Association for Consumer Research.

Scheffé, Henry (1952), "An Analysis of Variance for Paired Comparisons," __Journal of the American Statistical Association__, 47, 381-400.

Sheluga, David (1978), "The Relationship of Product Preference, Information Search, and Choice," Unpublished Masters Thesis, Purdue University, West Lafayette, Indiana.

Tversky, Amos (1972), "Elimination by Aspects: A Theory of Choice," __Psychological Review__, 79, (July), 281-299.

Westwood, D., Lunn, T. and Beazley, D. (1974), "The Trade-Off Model and its Extensions," __Journal of the Market Research Society__, 16, 227-241.

Wright, Peter (1975), "Consumer Choice Strategies: Simplifying Vs. Optimizing," __Journal of Marketing Research__, 12, (February), 60-67.
