Alternative Formulations of a Quality-Of-Service Measure
Citation:
James B. Wiley and Paul D. Larson (1993), "Alternative Formulations of a Quality-Of-Service Measure," in E - European Advances in Consumer Research Volume 1, eds. W. Fred Van Raaij and Gary J. Bamossy, Provo, UT: Association for Consumer Research, Pages: 329-337.
[Research for this paper was supported by a grant from the Social Science and Humanities Research Council of Canada.]

The approaches used to date to operationalize the concept "quality-of-service" have been primarily compositional in nature. This paper compares and contrasts a variety of strategies for operationalizing the quality-of-service construct within the framework of Coombs' Theory of Data. It also illustrates how recently formulated scale development methodologies, especially those based on item response theory, can be used to measure quality-of-service in the retail sector.

The issue of quality, including service quality, is attracting great interest among governments and businesses (Berry and Parasuraman, 1991). The Japanese government gives the yearly Deming award and the U.S. government awards the Baldrige prize. Recently (Oct. 25, 1991), Business Week magazine published a special issue on quality, "The Quality Imperative." In a sidebar, the section on services proclaims "The main problem: services are harder to measure than widgets are." The Nov. 11, 1991 issue followed up with a cover proclaiming "Value Marketing: It's the Way to Sell in the 90's."

There is considerable agreement among scholars regarding many aspects of the quality-of-service construct. There is consensus that it should be defined from the perspective of the consumer, that it is consumers' perceptions of what they receive that are important, and that these perceptions have multiple aspects or attributes, i.e., they are not unidimensional. While the nature of quality-of-service is of great interest, and there have been efforts to measure it (Parasuraman et al., 1988), there currently is no widely accepted tool for measuring quality-of-service at the retail level. For example, the SERVQUAL instrument, which provides the most widely used tool for measuring perceived quality-of-service (Parasuraman et al., 1985, 1988), includes no items that pertain to merchandise. This paper describes decompositional procedures that might be used to measure quality-of-service at the retail level. The procedures are compared with previously used approaches on several methodological dimensions, notably:
* Distance versus vector model. The SERVQUAL instrument posits a vector model for the process which links performance to satisfaction, i.e., the "optimal" level of performance on a dimension is at its end, positive or negative. The dimension is divided into "satisfying" and "dissatisfying" ranges based on the respondent's expectations regarding performance. Multi-attribute models, which are widely used in marketing, are also based on a vector model. An alternative to the vector model is the distance model formulation, which posits an ideal, expected, or norm level of performance that is not at the "end" of the attribute. Deviations of performance from this ideal, in any direction, potentially can result in dissatisfaction. The expectancy-value model, discussed below, is a special case of a distance model. Coombs (1964) provides the classic discussion of distance versus vector models.
In the following discussion it will be assumed that the reader is familiar with distance versus vector models and with self-explicated (compositional) versus decompositional inference of positions on attributes. A theory-based procedure is outlined in the last section of the paper. Illustrations of decompositional analyses of several of the question formats discussed below are found in Wiley and Larson (1992).
SERVICE QUALITY: CONCEPTUAL DEFINITIONS
Most, if not all, conceptualizations of quality-of-service are based on linear compensatory models. [It frequently is hypothesized that choice is based on a two-stage procedure in which a conjunctive procedure is used to select items entering the evoked set and a compensatory model is used to select among the items in the evoked set. The evoked set consists of items which have a non-zero probability of being selected. A conjunctive process requires alternatives to exceed some minimum standard of performance on each attribute, or subset of attributes, in order to be considered for choice. Compensatory models imply that alternatives can compensate, or make up, for being deficient on one attribute with more than sufficient performance on others. Einhorn (1970) endeavored to operationalize a conjunctive-like process using self-explicated data. Wiley (1975) attempted to provide decompositional procedures for diagnosing conjunctive-like processes.] The basic linear compensatory model can be represented as:
S_jk = Σ_i W_ik [P_ijk - I_ik]^r,    (1)

where
i = the attribute or dimension of performance, j = the firm, and k = the respondent. Then S_jk = respondent k's satisfaction score with firm j; W_ik = the importance weight given attribute i by respondent k; P_ijk = respondent k's perception of the performance of firm j on attribute i; and I_ik = respondent k's expectation (or norm) for performance on attribute i. The exponent r differentiates between the vector model (r = 1.00) and the Euclidean distance model (r = 2.00).
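To make the two formulations concrete, here is a minimal numerical sketch of Equation 1 (the data, weights, and ideal levels are hypothetical, and r is assumed to take only the integer values 1 or 2):

```python
import numpy as np

def satisfaction(P, I, W, r=1):
    """S_j = sum_i W_i * (P_ij - I_i)**r for one respondent.

    r=1 gives the vector model; r=2 gives the Euclidean distance model
    (with r=2, larger scores indicate larger deviations from the ideal).
    """
    deviations = (P - I[:, None]) ** r
    return W @ deviations  # one score per firm

P = np.array([[5.0, 3.0],   # perceived performance: attribute 1, firms A and B
              [4.0, 6.0]])  # attribute 2
I = np.array([4.0, 5.0])    # expectation/ideal level per attribute
W = np.array([0.7, 0.3])    # importance weights

print(satisfaction(P, I, W, r=1))  # vector model:   [ 0.4 -0.4]
print(satisfaction(P, I, W, r=2))  # distance model: [ 1.0  1.0]
```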
The potential advantage of a multi-attribute conceptualization of "quality-of-service" over a simpler "overall" judgement is that knowledge of the structure of the evaluation can lead to diagnosis of the strengths and weaknesses of the firm's performance. However, two of the model's components, the weights and the expectations, have proved problematical across many applications.
The importance weights W_ik provide for individual differences in the compensatory rate of substitution between dimensions of service quality provided by firms. For example, segments might be formed of consumers who are similar with respect to the trade-offs they will accept between attributes. However, there is considerable evidence that weights do not add to the ability to predict when included in a self-explicated model. For example, Lehmann (1971) found that inclusion of weights contributed little to the ability of a multi-attribute model to predict television show preference. This result has been replicated in numerous other studies. One reason weights may contribute little to predictive power is that responses to the other model component (beliefs in multi-attribute models or performance perceptions in an expectations-performance model) may implicitly incorporate the importance information. For example, respondents may give a wider range of responses, or respond with greater reliability, to important attributes. Work by Bass and Wilkie (1973) using a multi-attribute model indicates that normalizing belief measures before inclusion in the model increased the ability to predict a criterion. This result implies some correlation between the weight of an attribute and the variance of responses on it.
A second reason weights might contribute little to prediction in self-explicated models is that the importance judgments are unreliable. Wiley (1977) reported there was little test-retest reliability in self-explicated importance ratings over time in a multi-attribute model.
Finally, the relationship between level and value is conceived to be monotonic in virtually all multi-attribute formulations, including the relationship between performance and satisfaction in expectations-performance multi-attribute models. It is well known that linear models are extremely robust when the functions linking independent and dependent variables are monotonic, so differences in weighting schemes tend to have little effect on predictive accuracy.
Inclusion of self-explicated ideal points, I_ik, in self-explicated models has proven to be even less fruitful than the inclusion of self-explicated importance weights. Four of the six papers reviewed by Wilkie and Pessemier (1973) that addressed the issue found that including ideal points did not contribute to predictive power. Koelemeijer (1991) reports similar results in the context of service quality measurement. Wiley (1977) found unreliability of self-explicated ideal-point measurements over time. Wiley, MacLachlan, and Moinpour (1976) found little correspondence between self-explicated and inferred ideal points. The results with self-explicated ideal points across a variety of areas suggest that respondents have trouble comprehending the concept. On balance, it appears that inclusion of weights and ideal points, such as expectations, contributes little to predictive power when self-explicated methodology is used. Furthermore, Wall and Payne (1973) provide results which indicate that calculating difference scores in the fashion of expectation-performance models masks the true relationship between variables, even when the true relationship has a form such as Equation 1. Peterson and Wilson (1992) provide related arguments specifically targeted to consumer satisfaction research. However, the importance and ideal-point constructs remain compelling because of their possible segmentation ramifications. Decompositional procedures provide an alternative way of getting at both components, so the conceptual definition of the constructs remains of interest.
There are two major conceptualizations of the construct: the expectations/performance and the disconfirmation. Each conceptualization posits that satisfaction is related to the level of performance of the retail outlet on a bundle of service attributes and that perceived performance levels are differentially valued by consumers. The expectations/performance formulation assumes a vector model in which value is at least monotonically related to level. The disconfirmation formulation admits the possibility of a distance model in which value is monotonically related to the discrepancy between performance and some ideal, expected, or norm level on the attribute. Whether satisfaction or dissatisfaction is the outcome of a service encounter depends on the correspondence of the perceived level of performance with a benchmark expectation or norm level. In the expectations/performance view, the benchmark is the consumer's expectation for the firm's performance on the attribute. The disconfirmation view is that consumers' experience-based norms for performance are a more appropriate benchmark.
The expectancy-value conceptualization, which has not received much attention in the service quality literature, holds that attributes are states which do or do not characterize objects. Consumers may differentially value the attribute state. They also may be uncertain whether the attribute is associated with the object and discount the value they will receive proportionately to their uncertainty of receiving it.
The Expectations/Performance Approach
The underlying conceptualization of this view is that consumer satisfaction occurs when the outcome of an encounter meets or exceeds the consumer's expectations. The expectations are predictions of the nature and level of performance that the consumer will receive. Dissatisfaction occurs when performance fails to meet expectations (Oliver, 1979). Expectations in this view can be based on direct or indirect experiences. They are provider specific, but they may differ with usage context. For example, a user's expectations regarding a bank's services may differ depending on whether he or she is making a deposit/withdrawal or seeking a mortgage renewal. The integration of expectations-performance judgments is conceived to be compensatory in nature, i.e., unsatisfactory performance on one attribute can be compensated for by satisfactory performance on another attribute.
The Disconfirmation Approach
The predictive nature of the expectations construct raises the question of what happens when the consumer's expectations for the quality of service provided by a firm are very low and the firm exceeds these low expectations. Does this lead to consumer satisfaction? Or does the consumer remain dissatisfied until the firm performs at some normative level?
Woodruff, Cadotte, and Jenkins (1983; Cadotte, Woodruff, and Jenkins, 1987) propose that performance is compared to experience-based norms rather than expectations. Experience-based norms derive from the consumer's perceptions of what the firm should do. For example, if a firm exceeds the low expectations that a customer has for it, satisfaction may not follow, because the consumer believes the firm still is not performing at the level it should. The above authors also hypothesize a zone of indifference between satisfaction and dissatisfaction levels, i.e., a service experience must fall outside an acceptable range of performance before it is viewed as either a positive or a negative disconfirmation.
The Expectancy-Value Approach
This approach differs from the above two in that it does not admit "degrees" of performance on attributes. Implicitly, it is assumed that consumers have needs regarding the attributes, and outlets either conform to these needs or they do not. If the need is met, the consumer gets 100% value vis-a-vis that attribute. This view represents a "step" version of a distance model in that the firm must meet consumers' needs within an acceptable range. Deviations outside the acceptable range in any "direction" result in no contribution toward satisfaction. While the possibility that over-performance on an attribute does not lead to increased satisfaction may seem counter-intuitive, one need only recall the occasional overly solicitous waiter or store clerk to provide an example of the phenomenon.
OPERATIONALIZING THE CONCEPTS
The SERVQUAL instrument operationalizes the service quality concept by having respondents make self-explicated judgments in response to the two highlighted questions in sections QIb and QIIa of Table 1. Koelemeijer (1991) tested a disconfirmation approach which was operationalized using the highlighted question in section QIa of Table 1. The difference between these two formulations can best be appreciated in the context of a classification of data types, such as the typology developed by Coombs (1964). The Coombsian typology also suggests other types of questions that might be asked in the context of service quality research at the retail level. A variety of questions pertaining to quality-of-service are provided in the course of illustrating the typology. Decompositional procedures have been developed for the various facets of the typology. Thus, one advantage of classifying questions in terms of the typology is that, once a question is correctly classified, appropriate procedures for analyzing responses to it can be identified. A decompositional approach for analyzing SERVQUAL data is suggested by the typology and outlined below. A theory-based procedure for analyzing disconfirmation data is described in more detail in a subsequent section.
Coombsian Data Typology
The focus of the following discussion is on the facets of the typology that directly pertain to expectations/performance and disconfirmation models of the quality-of-service construct. However, the remaining facets are briefly discussed and illustrated for completeness. Illustrative questions are provided to indicate the variety of additional question formats available for research in the area of quality-of-service.
The Coombsian typology maps questions into points or vectors in a hypothesized perceptual space. Responses to questions are taken to reveal a) order or b) proximity relations on vectors, points, or sets of these points. A vector, or point, generally can be identified with one of three sets: stimuli, respondents, or questions. For example, the typical "perceptual map" appearing in current introductory marketing textbooks is one of three types. The simplest presents a configuration of points representing brands in a perceptual space of underlying dimensions upon which respondents presumably base their judgments of similarity among the brands. The second type adds points representing respondents or segments to the previous configuration. These ideal points identify the most preferred level of each of the underlying perceptual attributes. The third type adds vectors representing questions or items to the first or second type of configuration. The vectors are oriented so that the angle between the vector representing a question and a perceptual dimension reflects the correlation between the order of brands' ratings on the question and their ordering on the respective perceptual dimension, with high correlations corresponding to small angles between the vectors and the dimensions.
Quadrant I: Preferential Data. Data in this quadrant involve relations on pairs of pairs of points. For example, the first question in Table 1, section QIa, asks the respondent to indicate which store comes closest to meeting their expectations for dress and neatness. Under a distance model, the interpretation would be that the respondent has some preferred level on a "neatness" attribute. When making the required judgment, respondents compare the level they perceive each store to deliver with their preferred level and then pick the store that comes closest to their preferred level. Interpreted as a distance judgment, the respondent is judging the distance between the point representing their own position and the points representing the respective stores. Hence, the judgments are of distances between pairs of points from two classes: respondents and retail outlets. The judgment indicates which distance between pairs of points is shortest. [To some, a distance metaphor stretches credulity in that it apparently hypothesizes a very efficient Euclidean geometer residing in respondents' brains. In fact, Coombs made explicit the implicit assumptions underlying most data analysis in the social and behavioral sciences. In general, it is numbers that are analyzed, not behavior, and numbers and their inter-relationships have a ready geometric interpretation.]
The second question in this section (Koelemeijer, 1991) requires a more complex judgment involving points from three sets. The respondent must make the distance judgments as with the previous question, but must then compare the distances with partitions on a "1" to "7" numerical scale. The numerical labels presumably demarcate the continuum into eight zones: below "1", between "1" and "2", between "2" and "3", and so forth. It may be imagined that some value on this scale (such as "4") corresponds to just matching the respondent's expectations. If the firm fails to meet expectations, the respondent must judge whether the discrepancy is below the position of "1" on the continuum, between the positions of "1" and "2", etc. Thus, there are three classes of points involved: those corresponding to respondents or segments, stores, and rating-scale boundary values. A theory-based procedure for analyzing data of this sort is described below.
The last two questions in this section are conceptually similar procedures for eliciting attribute/store associations. Under a distance model, data from the second-to-last question would be represented as an asymmetric matrix with attributes corresponding to rows and stores corresponding to columns. The numbers in the cells of the matrix would indicate the rank-order distance between the column stores and the attribute anchoring a row. The positions of the rows and columns would be reversed in the case of the last question. In both cases, a multidimensional unfolding analysis could be used to analyze the data matrices (Coombs, 1964).
QIb. The data corresponding to the QIb section of Table 1 correspond to proximity judgments on pairs of points from two sets. These data differ from the QIa data in that the comparison is not with a point representing the respondent's most preferred level, but rather with a range representing acceptable levels on the attribute. The first question represents a way of operationalizing the expectancy part of the expectancy-performance conceptualization of satisfaction (Parasuraman et al., 1988), which they operationalize with a Likert-like scale. [One way to analyze Likert scale data is to treat the response categories as sets of proximity zones. For example, items the respondent "strongly agrees" with presumably must be relatively proximate to the position most acceptable to the respondent. Items the respondent "agrees" with presumably lie outside the "strongly agree" zone, but are more proximate than a "neutral" zone, and so forth. Becherer et al. (1981) provide an illustration of this analysis.]
The second question is related to the economic concept of indifference sets, from which indifference curves could be estimated. That is, the respondent is presented with bundles of services and asked to indicate which bundles provide equal utility. The third question represents a way of operationalizing the expectancy-value conceptualization of satisfaction. That is, the respondent is presumed to have a zone of acceptance. Attributes sufficiently closely associated with the store will fall within this zone of acceptance and be classified by the respondent as characterizing the store. Note that different respondents may have different zones of acceptance.
Quadrant II: Single Stimulus Data. The QIIa data represent order relations on points from two sets. For example, in the first question in the QIIa quadrant of Table 1, the sets are points representing stores and points representing boundaries corresponding to the integers 1, 2, 3, ..., 7. When asked to use the scale, the respondent presumably compares the position on the scale corresponding to their perception of the friendliness of firm XYZ employees with the positions of the boundaries. For example, if the store's employees are perceived to be very unfriendly, the position of the store would be less than the position of "1" on the scale, and the appropriate response would be "1". If the position of the store were greater than the position representing "1" but less than the position representing "2", the appropriate response would be "2", and so forth. A probabilistic version of this item response logic is provided below.
QIIb. The data in this quadrant correspond to proximity relations on points from different sets. In the case of the first question, the points are those representing the stores and those representing the zone within which the respondent is willing to classify a store as having "friendly" employees. The second question represents a typical operationalization of the evoked set concept. The proximity formulation implies a conjunctive-like process, since an item must be within the zone of acceptance on all attributes to be accepted.
Quadrant III: Stimulus Comparison Data. Data in the QIIIa quadrant represent order relations on points from the same set. In the case of the illustrative question, the respondent is conceived to compare the position of the point representing store A on the "friendliness" continuum with the point representing store B. The response reveals which point is perceived to be further along the continuum.
The QIIIb quadrant represents proximity relations on points from the same set. For example, in responding to the illustrative question, the respondent presumably reveals stores whose points are perceived to be "close" on the friendliness continuum.
Quadrant IV: Similarities Data. The data in quadrant QIVa represent order relations on pairs of points from the same set. For example, when asked to indicate whether outlet A is more similar to B than C is to D, the respondent implicitly determines the "distance" between A and B and compares it to the distance between C and D. The pair of points that is closest in the space spanned by a hypothesized set of perceptual attributes is the pair that is most similar. The first question would be appropriate when an investigator did not want to prompt the respondent for the attribute(s) to consider. The second question would be appropriate when the investigator wanted the respondent to consider a specific attribute.
QIVb data provide information on pairs of points that are perceived to be close. The illustrative question differs from previous proximity questions in that a definition of acceptance is suggested by the first part of the question, and the respondent is asked to indicate whether any other pairs of stores fall within the suggested range.
An Alternative Conceptualization of Expectation-Norm Questions
The first questions in the QIb and QIIa sections of Table 1 are the operationalizations of the expectations and performance questions of the SERVQUAL instrument. A recurrent issue that has been raised regarding the operationalization of SERVQUAL is the appropriate conceptualization of the expectations construct. It is introduced as a point within the SERVQUAL formulation. However, a vector model is implicit in the use of factor analysis to analyze responses to the two SERVQUAL questions. Questions are represented as vectors in factor analysis, and factor loadings correspond to the correlations between the questions and the hypothesized underlying factors.
An alternative approach would be to view the expectations component as a vector. The vector representing the degree to which stores should be characterized by the item would be positioned in the factor space so that the angle between the expectation-norm vector and the respective item vectors would be small for those questions which should be associated with stores and large for those items which should not be associated with stores. The resulting expectation-norm vector would identify the direction in the performance space associated with the respondent's satisfaction or dissatisfaction. Segments would consist of bundles of expectation-norm vectors with approximately the same orientation in the performance space. Programs such as PREFMAP, or procedures such as multivariate analysis of variance, can be used to map expectations data into a factor space of performance judgments; a least-squares sketch of this mapping follows.
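The sketch below illustrates the regression step underlying this kind of vector fitting (it is not PREFMAP itself, and the factor scores and "should" ratings are hypothetical): one respondent's expectation ratings of a set of stores are regressed on the stores' performance factor scores, and the fitted coefficients give the orientation of that respondent's expectation-norm vector.

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(10, 2))                 # 10 stores x 2 performance factors
e = F @ np.array([0.9, 0.2]) + rng.normal(scale=0.1, size=10)
# e: one respondent's "should" ratings of the 10 stores (hypothetical)

beta, *_ = np.linalg.lstsq(F, e, rcond=None) # least-squares vector fit
direction = beta / np.linalg.norm(beta)      # orientation in the factor space
print(direction)
```

Respondents whose direction cosines are similar could then be grouped into a segment, as suggested in the text.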
THEORY BASED MEASUREMENT
A psychometrically sound approach for developing a quality-of-service scale based on a vector model for QIa data follows from work by Bechtel and Wiley (1983). A scale is defined here as a variable whose value is inferred from a composite of the responses to multiple questions.
The approach has three elements: a model of the response process, a distribution theory, and an estimation procedure. The model of the response process is based on the "law of categorical judgement," itself patterned on Thurstone's (1927) law of comparative judgement. The law of categorical judgement consists of equations relating stimulus parameters and category boundary parameters to the cumulative proportions with which each stimulus is judged to be in each response category of a set of categories ordered with respect to a given attribute (Torgerson, 1958). The basic assumptions regarding the response process are as follows:
1. The psychological continuum of the respondent can be divided into a specified number of ordered categories or steps.
2. Owing to a number of factors, respondents may not place a particular category boundary at the same point on the continuum. Rather, each boundary projects a distribution of positions on the continuum.
3. Likewise, the respondents will not necessarily judge a stimulus to reside at the same point on the continuum. It too will project a distribution of positions on the continuum.
4. A respondent judges a given stimulus to be below a given category boundary whenever the value of the stimulus on the continuum is less than that of the category boundary.
[Table 1: Coombsian classification system]
The distribution theory for the model is based on an extension of individual choice theory, as developed by McFadden (1974) and Yellott (1977). The extension is to aggregate responses for ordered categories (Andrich, 1978; Bechtel and Wiley, 1983). Faced with a scale item, it is assumed that respondents compare their own position on the psychological continuum with the positions of the item's category boundaries, and their response is governed by the differences between the values of these respective parameters. Following this approach, it is assumed that the distributions projected by the respondents' positions and the items' category boundaries are independently and identically distributed according to the double exponential distribution. The observed cumulative response distributions are posited to be a function of the cumulative distribution of difference scores on the psychological dimension. The cumulative distribution for differences between two double exponential variables is the cumulative logistic distribution. Thus, the distribution theory points to a logit analysis based on cumulative proportions.
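This last fact is easy to verify numerically. The short simulation below (a sketch, not part of the original paper) draws two independent double exponential (Gumbel) samples and checks that the empirical distribution of their difference matches the logistic distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
# Difference of two independent standard Gumbel variables:
d = rng.gumbel(size=100_000) - rng.gumbel(size=100_000)

# Compare the empirical CDF with the logistic CDF at a few points:
for x in (-2.0, 0.0, 1.5):
    print(x, (d < x).mean(), 1 / (1 + np.exp(-x)))
```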
[Table 2: The raw frequency matrix F (hypothetical data)]
Organizing the Data
Table 2 provides data for two strata taken from a hypothetical quality-of-service survey. The immediate data consist of the frequency with which respondents in each stratum (such as customers versus non-customers of the firm) respond positively, neutrally, or negatively to each of the questions defining a scale, such as the five questions defining the reliability scale of the SERVQUAL instrument. These observed frequencies may be arranged in the form of two n x (m+1) matrices F_s, s = 1, 2 (in the case of two strata), where the rows correspond to the questions (j = 1, 2, ..., n) and the columns to the categories (k = 1, 2, ..., m+1). An element f_sjk is the number of members of stratum s who responded in the kth category of question j.
The matrix C in Table 3 is an n x (m+1) matrix whose elements c_sjk equal the number of times a respondent in segment s responded below the kth category boundary of question j. The matrix C is constructed from F by cumulating to the right; in general, c_sjk = Σ_{k'=1..k} f_sjk'. In order to conserve space, the remaining tables illustrate calculations only for the first segment.
The matrix P in Table 4 is an n x m matrix whose elements give the proportion of times a respondent in segment s responded below the kth category boundary. In general, p_sjk = c_sjk / c_sj,m+1, where c_sj,m+1 is the total count for question j.
The matrix L in Table 5 is an n x m matrix whose elements consist of the logits of the elements of the matrix P. In general, l_sjk = ln[ p_sjk / (1 - p_sjk) ]. The logits, or log-odds of responding below a category boundary, are the dependent variable for modeling the responses to the rating scales.
The elements l_sjk (Table 5) are the log-odds of the elements of Table 4. Equation 2 expresses these log-odds as a linear function of the parameters of interest. Procedures for estimating the model parameters are outlined in the Appendix.
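As a concrete illustration of the preceding steps, the sketch below traces the F -> C -> P -> L construction for a single segment (the counts are hypothetical and are not those of Tables 2-5):

```python
import numpy as np

# Raw frequency matrix F for one segment:
# 5 questions x 3 response categories (m = 2 category boundaries).
F = np.array([[20, 18,  2],
              [10, 25,  5],
              [ 5, 20, 15],
              [ 2, 10, 28],
              [ 1, 11, 28]])

C = np.cumsum(F, axis=1)     # cumulate to the right: counts below each boundary
N = C[:, -1:]                # total respondents per question
P = C[:, :-1] / N            # proportion below boundary k (an n x m matrix)
L = np.log(P / (1 - P))      # logits: log-odds of responding below boundary k

p_vec = P.ravel()            # vectorized form, as in the text's p'
print(L)
```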
Modeling the Data
The first step in modeling the data is to represent them in vector format. Thus, the twenty elements of matrix P in Table 4 may be organized as a vector of 20 cumulative proportions:

p' = {p_111, p_112, ..., p_151, p_152, p_211, p_212, ..., p_222}
   = {.50, .94, ..., .02, .31, .48, .91, ..., .34}
This vector is one sample estimate of the corresponding vector of "true" cumulative proportions.
The true cumulative proportions may be viewed as the expectation of a probabilistic response process:
l_sjk = [ a_s - (u_j + t_j(k)) ] + e,    (2)

where

l_sjk = ln{ p_sjk / (1 - p_sjk) },

a_s = a fixed (population) expectation or norm value in segment s (s = 1, 2),

(u_j + t_j(k)) = a fixed (population) boundary value for each item j's boundaries k (j = 1, ..., n; k = 1, ..., m). Note that the item boundary values are additively decomposed into item and item-boundary components (item boundaries are nested within items): u_j is the intensity of item j, and the t_j(k) are the item's boundaries expressed as deviations from the intensity value, and

e = error due to inadequate parameterization or incorrect specification.
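As a worked sketch (the parameter values are hypothetical, not estimates from the paper), the systematic part of Equation 2 and the cumulative proportions it implies can be computed directly:

```python
import numpy as np

a_s = 0.5                           # segment expectation/norm value a_s
u = np.array([-1.0, 1.0])           # item intensities u_j
t = np.array([[-0.5, 0.5],          # boundary deviations t_j(k), item 1
              [-1.0, 1.0]])         # boundary deviations t_j(k), item 2

logits = a_s - (u[:, None] + t)     # l_sjk = a_s - (u_j + t_j(k))
p = 1 / (1 + np.exp(-logits))       # implied cumulative proportions p_sjk
print(np.round(p, 2))
```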
[Table 3: The cumulative frequency matrix C]
[Table 4: The cumulative proportion matrix P]
[Table 5: The logistic transformation matrix L]
Figure 1 illustrates the item response functions upon which the procedure is based. Item response functions for two hypothetical questions having three response categories are provided. Question 1 is taken to have an intensity u_j of -1.00, with a width of 1.00 between the upper and lower category boundaries. Thus, the position of the upper boundary is -.50 = (-1.00 + .50) and the position of the lower boundary is -1.50 = (-1.00 - .50). Question 2 is taken to have a position of +1.00, with a range of 2.00 units between the upper and lower boundaries. Thus, the position of the upper category boundary is at 2.00 = (+1.00 + 1.00), while the position of the lower boundary is at 0.00 = (+1.00 - 1.00). The positions of the items are found along the horizontal axis. The vertical axis gives the probability of responding above the category boundaries for segments having a_s values ranging from -2.20 to +3.30.

Following the horizontal line which intersects the vertical axis at the value .50, it is seen that it intersects the curve labeled "Q1=-1 Above Lower" at the value -1.50 on the horizontal axis, indicating that respondents having an expectation or norm a_s of -1.50 would respond above the lower boundary of Question 1 with 50% probability and below that boundary with 50% probability. It intersects the response function for the upper boundary at a value of -.50, indicating that respondents with an expectation or norm corresponding to -.50 would have a 50% probability of responding above that boundary and a 50% probability of responding below it.
Taking the position of the expectation at 1.00, for example, the percentage of responses above the upper boundary of Question 1 is 93%, and above the lower boundary it is 99%. The percentage responding above the upper boundary of Question 2 is 15%, while the percentage responding above the lower boundary is 85%. The Appendix provides procedures for inferring the positions of the parameters given observed patterns of response to the questions.
[Figure 1: Item response curves for both questions, three categories]
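The quoted percentages can be reproduced with a logistic response function, but only if a scaling constant of about 1.7 (the conventional logistic approximation to the normal ogive) is assumed; the paper does not state the scale used in the figure, so this constant is an assumption of the sketch below:

```python
import numpy as np

def p_above(a, boundary, scale=1.7):
    """P(respond above a boundary) for a segment with norm value a.
    The 1.7 scaling constant is assumed, not taken from the paper."""
    return 1 / (1 + np.exp(-scale * (a - boundary)))

a = 1.0                               # segment expectation/norm a_s
q1_lower, q1_upper = -1.5, -0.5       # Question 1 boundaries
q2_lower, q2_upper = 0.0, 2.0         # Question 2 boundaries
for b in (q1_lower, q1_upper, q2_lower, q2_upper):
    print(f"boundary {b:+.2f}: P(above) = {p_above(a, b):.2f}")
# -> 0.99, 0.93, 0.85, 0.15, matching the percentages in the text
```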
CONCLUSION
Research into quality-of-service has been expanding rapidly during the past decade. Compositional procedures based on self-explicated data have been used to date. However, these approaches are subject to some well-known disadvantages, especially when difference scores are calculated (Wall and Payne, 1973; Peterson and Wilson, 1992). Decompositional procedures offer an alternative methodology for analyzing service quality data. Coombs (1964) provides a typology for classifying decompositional procedures. Algorithms have been developed for the various facets of the typology, so once a question is correctly classified, procedures for its analysis can be identified. Furthermore, numerous alternative varieties of questions that might be asked in connection with service quality are suggested by the typology. This paper has discussed and illustrated a number of the above points.
APPENDIX
Since the logits l_sjk of Equation 2 are heteroscedastic and correlated, cross-sectional analysis is based on generalized least squares (GLS) estimation procedures. The general linear model to be estimated may be represented as:
L = X b + E,    (3)

where L is the vector of logits, X is a (reduced rank) design matrix, b = {a_s : u_j : t_j(k)} is the vector of parameters, and E is a vector of errors. The GLS estimates of b are given by:

b = (X' S^-1 X)^-1 X' S^-1 L,    (4)

where S is the covariance matrix of the logits.
An advantage of the present approach, provided the raw counts are available, is that the full covariance matrix of the observations can be computed, since the covariance between any pair g, h of logits is estimated by:

cov(l_g, l_h) = (p_gh - p_g p_h) / [ n p_g p_h (1 - p_g)(1 - p_h) ],    (5)

where n is the sample size, p_g and p_h are the gth and hth elements of p', and p_gh is the proportion of respondents who responded jointly to the gth and hth elements of p'. The familiar formula for the variance of a logit results when g = h:

var(l_g) = [ n p_g (1 - p_g) ]^-1.    (6)
If only the summary proportions by question are available, the covariances of responses between questions cannot be computed. However, the covariances of the intra-item responses may be calculated. Since the probability of responding to more than one category of a given item is zero, p_gh = 0 in (5). Hence,

cov(l_g, l_h) = - p_g p_h / [ n p_g p_h (1 - p_g)(1 - p_h) ],    (7)

and a block diagonal matrix is used for S.
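A compact sketch of this estimator (Equations 4-6 as given above; all inputs below are hypothetical) might look as follows:

```python
import numpy as np

def logit_cov(p, p_joint, n):
    """Covariance matrix of logits from Equation 5. p_joint[g, h] is the
    joint response proportion, with p_joint[g, g] = p[g], which recovers
    Equation 6 on the diagonal."""
    G = len(p)
    S = np.empty((G, G))
    for g in range(G):
        for h in range(G):
            S[g, h] = (p_joint[g, h] - p[g] * p[h]) / (
                n * p[g] * p[h] * (1 - p[g]) * (1 - p[h]))
    return S

def gls(X, L, S):
    """b = (X' S^-1 X)^-1 X' S^-1 L  (Equation 4)."""
    S_inv = np.linalg.inv(S)
    return np.linalg.solve(X.T @ S_inv @ X, X.T @ S_inv @ L)

# Example with two logits and a one-column design matrix (hypothetical):
p = np.array([0.6, 0.3])
p_joint = np.array([[0.6, 0.2],
                    [0.2, 0.3]])
S = logit_cov(p, p_joint, n=100)
X = np.array([[1.0], [1.0]])
L = np.log(p / (1 - p))
print(gls(X, L, S))
```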
Wiley and Bechtel (1984) illustrate how hypotheses may be formulated and tested within the above framework.
REFERENCES
Andrich, D. (1978), "A Rating Formulation for Ordered Response Categories," Psychometrika, 43, 561-573.
Bass, F. M. and W. L. Wilkie (1973) "A Comparative Analysis of Attitudinal Predictions of Brand Preference," Journal of Marketing Research, 10, 262-69.
Becherer, R., E.R. Riorden, J.B. Wiley, and L.M. Richard (1981), "Integrating Empirical Research Into the Public Policy Process: An Illustration," Decision Sciences, 12, 633-44.
Bechtel, G.G. and J.B. Wiley (1983), "Probabilistic Measurement of Attributes: a Logit Analysis by Generalized Least Squares," Marketing Science, 2, 389-405.
Berry, L. and A. Parasuraman (1991), Marketing Services: Competing Through Quality, New York: The Free Press.
Cadotte, E.R., R.B. Woodruff, and R.L. Jenkins (1987), "Expectations and Norms in Models of Consumer Satisfaction," Journal of Marketing Research, 24, 305-14.
Carman, J. M. (1990), "Consumer Perceptions of Service Quality: An Assessment of the SERVQUAL Dimensions," Journal of Retailing, 66:1, 33-55.
Coombs, C. (1964), Theory of Data, New York: John Wiley & Sons.
Einhorn, H.J. (1970) "The Use of Nonlinear, Noncompensatory Models in Decision Making," Psychological Bulletin, 73, 221-30.
Green, P.E. and F.J. Carmone (1970), Multidimensional Scaling and Related Techniques in Marketing Analysis, Boston: Allyn and Bacon, Inc.
Koelemeijer, K. (1991), "Perceived Customer Service Quality: Issues on Theory and Measurement," Proceedings, Sixth World Conference on Research in the Distributive Trades, sponsored by the Dutch Ministry of Economic Affairs, Services Directorate, 68-76.
Lehmann, D. R. (1971), "Television Show Preference: Application of a Choice Model," Journal of Marketing Research, 8, 47-55.
McFadden, D. (1974), "Conditional Logit Analysis of Qualitative Choice Behavior," in Frontiers in Econometrics, Paul Zarembka, ed., New York: Academic Press, 105-142.
Oliver, R.L. (1979), "Product Dissatisfaction as a Function of Prior Expectation and Subsequent Disconfirmation: New Evidence," in New Dimensions of Consumer Satisfaction and Complaining Behavior, R.L. Day and H.K. Hunt, eds., Bloomington: Indiana University, 66-71.
Parasuraman, A., V. A. Zeithaml, and L. L. Berry (1985), "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing, 49 (Fall), 41-50.
Parasuraman, A., V. A. Zeithaml, and L. L. Berry (1988), "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing, 64 (Spring), 12-40.
Peterson, R.A. and W.R. Wilson (1992), "Measuring Customer Satisfaction: Fact and Artifact," Journal of the Academy of Marketing Science, 20, 61-71.
Thurstone, L.L. (1927), "A Law of Comparative Judgement," Psychological Review, 34, 273-286.
Torgerson, W.S. (1958), Theory and Methods of Scaling, New York: John Wiley & Sons, Inc.
Wall, T.D. and R. Payne (1973), "Are Deficiency Scores Deficient?" Journal of Applied Psychology, 58, 322-326.
Wiley, J.B. (1975), "An Evaluation of Multifactor Attitude Models Based on 'Order K/N' Data," unpublished doctoral dissertation, University of Washington, Seattle, Washington.
Wiley, J.B. (1977), "Stability of Inferred 'Importance' and 'Ideal Points' in Additive Models," in H.C. Schneider, ed., Proceedings, American Institute for Decision Sciences, San Francisco, 192-94.
Wiley, J.B. and G.G. Bechtel (1984),"Evaluating Societal Changes in Attitude," Psychological Bulletin, 96, 173-184.
Wiley, J.B., D.L. MacLachlan, and R. Moinpour (1976), "Comparison of Stated and Inferred Parameter Values in Additive Models: An Illustration of a Paradigm," in W. Perreault Jr., ed., Advances in Consumer Research, Vol. 4, Atlanta, GA, 98-105.
Wiley, J.B. and P.D. Larson (1992), "Decompositional Analysis of Selected Retailing Questions," Working Paper, Department of Marketing and Economic Analysis, University of Alberta.
Wilkie, W. L. and E. A. Pessemier (1973), "Issues in Marketing's Use of Multiattribute Attitude Models," Journal of Marketing Research, 10, 428-41.
Woodruff, R.B., E.R. Cadotte, and R.L. Jenkins (1983), "Modeling Consumer Satisfaction Processes Using Experience-Based Norms," Journal of Marketing Research, 20, 296-304.
Yellott, J.I. (1977), "The Relationship between Luce's Choice Axiom, Thurstone's Theory of Comparative Judgement, and the Double Exponential Distribution," Journal of Mathematical Psychology, 15, 109-144.