Repositioning Demand Artifacts in Consumer Research

Jean Perrien, University of Québec, Montréal
ABSTRACT - This paper investigates the current status of demand artifacts in consumer research and suggests a new definition of demand artifacts. It is argued that demand artifacts are far from being a major concern in consumer research, although they may potentially damage a theoretical relationship. Furthermore, this paper suggests linking demand artifacts to the identification of experimental manipulations, not hypothesis guessing. This new definition of demand artifacts reduces the subjective assessment of the presence or absence of demand biased responses that arises when demand artifacts depend solely on hypothesis guessing.
[ to cite ]:
Jean Perrien (1997) ,"Repositioning Demand Artifacts in Consumer Research", in NA - Advances in Consumer Research Volume 24, eds. Merrie Brucks and Deborah J. MacInnis, Provo, UT : Association for Consumer Research, Pages: 267-271.



[The preliminary draft of this paper was written when the author was a visiting professor at ESSEC (France).]


In their quest for objective knowledge, (scientific) consumer researchers are eager to control as many sources of error as possible. Even if the pursuit of objectivity looks like the "quest for the Holy Grail", it remains a legitimate goal (Hunt 1993). Among the myriad sources of error consumer researchers face within an experiment, the very nature of experimental units (human beings, with feelings, emotions and biases) constitutes a real challenge. Human beings may react to experimental manipulations in ways that researchers do not expect, depending on the role they adopt during the experiment. This unexpected experimental behavior creates demand artifacts (Sawyer 1975). When, in an experiment, the plausibility of demand artifacts cannot be discarded, they offer a rival explanation that challenges theoretical construction, and therefore objectivity, in consumer research.

Five years ago, in a provocative, albeit stimulating, article, Shimp, Hyatt and Snyder (1991) challenged most of our understanding of demand artifacts. Among other things, they argued that demand artifacts are often overemphasized as a source of error and that they must be viewed as a random source of error (until their analysis, demand artifacts had been defined as a systematic source of error).

This paper follows in Shimp, Hyatt and Snyder's footsteps by exploring the current status of demand artifacts in consumer behavior experiments and proposes a new, and less subjective, definition of demand artifacts.

DEMAND ARTIFACTS: A FRAMEWORK

Most of the available literature makes demand artifacts dependent on the ability of experimental units to identify the research hypothesis (Orne 1962, Rosenberg 1969, Rosnow and Aiken 1973, Sawyer 1975). When an experimental unit identifies the research hypothesis and adopts a role resulting from this guessing (experimental units may act as good, negative, apprehensive or faithful subjects; see Berkowitz and Donnerstein 1982), it is most likely that experimental results will be "demand biased". Consequently, observations on dependent measures will include an additional error term.

Shimp, Hyatt and Snyder (1991, p. 274; hereafter identified as SHS) agree with this view of demand artifacts. They formalize the probability of demand biased responses for an experimental unit i as follows:

Pr(Bi) = Pr(Ei) × Pr(Di|Ei) × Pr(Ai|Di)     (eq. 1)

where:

Pr(Bi) = probability that the ith subject is demand biased; Pr(Ei) = probability of encoding a demand cue; Pr(Di|Ei) = conditional probability of discerning the true experimental hypothesis or a correlated hypothesis; Pr(Ai|Di) = conditional probability of acting on the hypothesis.
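As a numerical illustration of how the three terms combine, eq. 1 can be evaluated directly. The probability values below are hypothetical, chosen only for illustration; they are not drawn from any study:

```python
# Illustrative evaluation of eq. 1: the probability that subject i
# produces demand-biased responses is the product of three probabilities,
# so all three conditions must hold for demand bias to occur.

def pr_demand_bias(pr_encode, pr_discern_given_encode, pr_act_given_discern):
    """Pr(Bi) = Pr(Ei) x Pr(Di|Ei) x Pr(Ai|Di)."""
    return pr_encode * pr_discern_given_encode * pr_act_given_discern

# Hypothetical component probabilities: even fairly high values
# yield a modest joint probability of demand bias.
print(round(pr_demand_bias(0.5, 0.3, 0.4), 2))  # 0.06
```

The multiplicative form is what drives SHS's argument that demand bias is less prevalent than reviewers assume: the joint probability is necessarily no larger than the smallest of its three components.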

This probabilistic formulation depicts the common understanding of demand artifacts. For SHS a demand cue is "a basis for discerning the experimental hypothesis." This definition has been widely accepted in the literature and stresses the key role of hypothesis guessing as the source of demand biased responses. Hence, role adoption derived from hypothesis guessing is at the origin of demand artifacts.

Recently, Darley and Lim (1993) questioned some of SHS's assumptions. They argue that, due to the underreporting of the hypothesis in postexperimental inquiries, the conditional probability of discerning the experimental hypothesis or a correlated hypothesis should be decomposed into two components (detecting and reporting the hypothesis versus detecting but not reporting the hypothesis). According to Darley and Lim (1993, p. 490): "... the existence of underreporting may hide the true nature and seriousness of demand artifacts." Furthermore, they challenge SHS's third condition for demand biased responses (acting on the hypothesis by adopting a role) on the premise that "there is growing evidence to suggest that subjects will be affected directly or indirectly in their behaviors or responses if conditions one and two hold true, regardless of the presence or absence of condition three." SHS (1993) question these arguments in a reply which emphasizes the lack of empirical support for Darley and Lim's statements.

In this paper we evaluate consumer behavior research practices in relation to demand artifacts and propose a new definition of demand artifacts that we perceive as less ambiguous than the usual understanding just presented.

DEMAND ARTIFACTS IN CONSUMER RESEARCH: CURRENT PRACTICES

SHS stress the fact that in the publication process, gatekeepers (i.e. reviewers) tend to overemphasize demand artifacts as an alternative explanation of experimental observations. To assess the extent to which demand artifacts are a real concern in experimental research on consumer behavior, a descriptive analysis of experimental research published between 1990 and 1993 (inclusive) was conducted by one judge. Articles investigating consumer behavior with an experimental methodology, published in the International Journal of Research in Marketing, the Journal of the Academy of Marketing Science, the Journal of Consumer Research, the Journal of Marketing and the Journal of Marketing Research, were content analyzed. These journals are considered major academic publications in the consumer behavior area and the official outlets of the four prominent associations in the field (ACR, AMA, AMS, EMAC). Moreover, articles published in these journals suffer less from length constraints than association proceedings. Hence, information on demand artifacts should be more exhaustive there than anywhere else.

Out of all the published material, 259 articles referred to experimental manipulations conducted on consumers and were consequently subject to potential demand artifacts (IJRM: 2.7%, JCR: 54%, JMR: 31.3%, JAMS: 6.2%, JM: 5.8%). Demand artifacts may be a concern either in the design of an experiment (for instance, the use of a disguise), in the analysis of experimental results (e.g. removing observations from respondents who, in a post-experimental inquiry, guessed the research hypothesis), or both. Hence, demand artifacts may be assessed either a priori (in the design of the experiment) or a posteriori (in the analysis of experimental results). However, let us note that such a content analysis does not pretend to provide a perfect indicator of academic attention toward demand artifacts.

From the beginning, it has to be recognized that a demand artifact controversy may occur "behind the scenes" (i.e. during the reviewing process). However, if reviewers consider information on the assessment of demand artifacts not worth including in the final paper (even though, during the reviewing process, it had been discussed as a potential rival explanation of experimental results), one may really question the claim that reviewers overemphasize demand artifacts. Moreover, from a sociological point of view, enhancing knowledge is possible only when the community, and not merely the reviewers, has no reason to suspect that the results of a research are demand biased. In a sense, by focusing on explicit consideration of demand artifacts, this paper adopts a conservative approach that will have to be borne in mind when interpreting results.

Only 43 articles (16.6%) formally refer to demand artifacts a priori, and 41 published experiments (15.8%) indicate an a posteriori concern for this source of error. It must be pointed out that 65.1% of the articles which indicate a concern for demand artifacts in the design of the experiment (a priori) never refer to this source of potential error in the analysis of results (a posteriori). All in all, 183 articles (70.6%) never mention any kind of concern for demand artifacts.
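The headline percentages above follow directly from the counts reported in the text (259 articles in total), as a minimal check shows:

```python
# Reproduce the headline percentages of the content analysis.
# Counts are taken from the text: 259 experimental articles in total,
# 43 with an a priori and 41 with an a posteriori concern for demand artifacts.
TOTAL = 259
A_PRIORI = 43      # demand artifacts addressed in the design of the experiment
A_POSTERIORI = 41  # demand artifacts addressed in the analysis of results

def pct(count, base=TOTAL):
    """Percentage of `base`, rounded to one decimal as reported in the text."""
    return round(100.0 * count / base, 1)

print(pct(A_PRIORI), pct(A_POSTERIORI))  # 16.6 15.8
```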

In his seminal work on demand artifacts, Sawyer (1975) points out potential procedures to reduce or control demand artifacts. In this analysis of major experimental research conducted on consumers, we intend to assess to what extent researchers availed themselves of these procedures. Bearing in mind the rather low percentage of articles referring to demand artifacts, the results are not surprising: only 6 experiments (2.3%) explicitly selected a research design which minimizes demand artifacts, and 4 of them (1.5%) specifically mentioned training the experimenters to reduce the occurrence of demand artifacts. Non-experimentation, a methodological solution for detecting some demand bias, was reported in only one experiment, and the hetero-method was never explicitly mentioned in any article. Let us mention that Darley and Lim (1993, p. 493) suggest relying on both non-experimentation and the hetero-method in the assessment of demand artifacts. Obviously, this is not yet the case. Nonetheless, 34 published experiments (13.1%) indicated they relied on postexperimental inquiries (PEIs) to assess respondents' guessing of the research hypothesis. However, the most intriguing result concerns the use of deception.

The essence of deception (disguise, cover story...) is to reduce demand artifacts by introducing a distance between the declared and the actual research objectives (i.e. hypotheses). Actually, this is the sole methodological role of deception, the consequences of which have been documented as ambiguous (Christensen 1977, Silverman 1977). Among the crème de la crème, 109 articles (42.1%) explicitly incorporated deception in the design of their experiment, which means that more articles refer to deception than to demand artifacts... although the latter is the sole justification for the former. The explanation is straightforward: deception is becoming part of the experimental process, despite the fact that it should be contingent on a high probability of demand biased responses. It may be argued that during "behind the scenes" negotiations between authors and reviewers, deception was defined as a way to reduce the probability of demand biased responses. Once more, if this is the case, there is a risk of inconsistency between the information provided to reviewers and to the academic community. Dealing with articles published in the most prominent journals should rule out such an inconsistency.

Finally, the type of design as well as the nature of respondents have been identified as potential sources of demand artifacts (Sawyer 1975). It has been documented that using within-subject designs, as well as relying on students, increases the probability of demand biased responses. Please note that this does not mean that within-subject designs and/or students as experimental units will automatically result in demand biased responses! Once again, it simply implies a greater amount of risk. In addition, there is still some controversy over the plausibility of demand effects due to the selection of students as experimental units. According to Orne (1962), students may inflate theory-supportive conclusions by adopting a positive role. Gordon, Slade and Schmitt (1986), reviewing 32 studies in which students and non-students participated as subjects in identical experiments, conclude: "After examining the statistical evidence, it is clear that problems exist in replicating with non-student subjects behavioral phenomena observed in student samples" (Gordon, Slade and Schmitt 1986, p. 200). In the following paragraphs we will look at these two issues (the type of design and the nature of experimental units).

Our analysis reveals that 20 experiments (7.7%) relied on within-subject designs and 185 (74.3%) of the published research used students as experimental units. Of the 20 within-subject experiments, only 2 formally addressed demand artifacts a priori and 1 a posteriori. Of the 185 experiments which used students as consumer proxies, 38 (20.5%) showed an a priori and 36 (19.4%) an a posteriori concern for demand artifacts. These last two percentages are slightly higher than those observed for the population as a whole (16.6% and 15.8%, respectively).

Prior to drawing any conclusions, let us stress once again that our observations are derived from explicit consideration of demand artifacts. Therefore, it may be possible that a researcher, while conducting an investigation, took demand artifacts into consideration but failed to refer to them in the body of the article. This also means that the gatekeepers (reviewers) did not request that formal information on demand artifacts be published. Secondly, our observations come only from published experiments in the aforementioned leading academic journals; no information on rejected papers is available. Once more, one might suspect that a large portion of rejections was due to weaknesses in the control of demand artifacts. However, nobody will challenge the fact that the reviewers who accept papers and those who reject them are the same people; there are no reviewers specializing in either the acceptance or the rejection of submitted papers. Hence, it is questionable to assume that reviewers overestimate demand artifacts when rejecting a paper and underestimate them when accepting it. Indeed, our results show that formal concern with demand artifacts is limited, even in cases of potentially high demand bias. With the previously mentioned limits in mind, we have to acknowledge that demand artifacts do not appear to be a crucial concern in published experiments. Finally, if someone still believes that demand artifacts are a prevalent concern in the reviewing process, our results suggest that gatekeepers should definitely ask authors to provide some basic information on the control of demand artifacts (only a couple of sentences), in order to prevent the academic community from suspecting demand biased effects.

Sawyer (1975) stressed the need to rely on postexperimental inquiries (PEIs) to assess demand artifacts. Although we agree with SHS that such a measurement of hypothesis guessing may be "tricky and potentially invalid" (SHS, p. 280) and itself subject to demand biased responses, PEIs are the only method of demand artifact measurement actually used. Other methods of demand artifact control are highly marginal, which once more tends to corroborate that demand artifacts are not a major methodological concern in consumer behavior research.

As far as deception is concerned, as stated earlier, we question how and why it is included in an experiment: deception aims only at reducing demand biased responses by masking the true research objective. Consequently, it must not be considered a casual component in the design of an experiment. If, and only if, the probability of demand artifacts is high should deception be used, as, for instance, in experiments where socially desirable measures are involved.

One possible explanation for this situation is that both the definition and the measurement of demand artifacts rest on loose and subjective ground. Up to now, demand artifacts have been perceived as a rival explanation of experimental results when units refer to the so-called research hypothesis in a post-experimental inquiry (the only method actually used, as pointed out previously). In the following section we propose a more restrictive, but less ambiguous, definition of demand artifacts. The objective is not to increase the methodological burden but to clarify when, in an experiment, there is a significant probability of demand biased responses.

DEMAND ARTIFACTS AND HYPOTHESIS GUESSING: TOWARD A NEW DEFINITION OF DEMAND ARTIFACTS

Hypothesis guessing is considered the sole cause of demand artifacts. A consensus seems to exist on this issue: from Orne (1962) to SHS, the ability of experimental units to identify the research hypothesis potentially induces demand biased responses. Coming back to SHS, hypothesis guessing is embedded in the three conditional probabilities for observing demand biased responses (see eq. 1 above). Of course, the adoption of a role by an experimental unit remains a prerequisite to demand biased responses.

Linking demand artifacts to hypothesis guessing is either too restrictive or too vague. Indeed, most, if not all, articles referring to the guessing of the hypothesis define it as guessing "the experiment purpose", in other words, the research question (SHS, p. 275). Such a definition systematically underestimates potential contamination by demand artifacts: when the research question involves sophisticated constructs or a multi-factor experiment, guessing the research purpose becomes an impossible task, although this does not mean that respondents will not react in a biased manner to experimental manipulations. A consumer may adopt a role which induces some biased responses without guessing the research hypothesis (see Farber 1963). Our contention is straightforward: experimental manipulations may induce demand biased responses, whereas linking demand artifacts to the guessing of the "experimental purpose" is ambiguous and subject to fluctuating interpretations by researchers.

To understand the relevance of connecting demand artifacts to manipulations, in the following paragraphs we explore the same two experiments on classical conditioning that SHS used to build their arguments on demand artifacts. When reproducing Gorn's research on classical conditioning (Gorn 1982), Kellaris and Cox (1989) found, in a post-experimental inquiry, that only one subject guessed the experimental hypothesis (i.e. the research question: the effects of classical conditioning on consumer preferences). Thus, SHS (p. 277) conclude that "The fact that fewer than one percent of the subjects discerned the research hypothesis is proof positive that demand artifacts could not have influenced findings obtained in this initial experiment." This is a casual but drastic view of demand artifacts which, to a certain extent, bypasses the problem of demand biased responses. With a sophisticated research purpose, demand biased responses no longer depend on the experimental units (their skills, attitudes, inferences about manipulations...) and on the experimental design, but on the level of formal sophistication of the research question and on the ability of experimental units to identify and formalize the research purpose in a post-experimental inquiry (which may itself be demand biased). Indeed, it reduces the occurrence of demand biased responses to what SHS qualify as simplistic experiments and transparent procedures (p. 275). Examples of experimental research which bypass the demand artifact problem simply because of a sophisticated research question, rather than because of respondents' behavior and the experimental design, are numerous.

For instance, in an experiment similar to Gorn's research, Nelson, Duncan and Frontczak (1985) investigated "the effects of distraction in a radio commercial on cognitive responses and message acceptance" (sic). To empirically test the causal relationship between distraction and their two dependent variables, they formalized a set of propositions (i.e. hypotheses) such as: "When controlling for message discrepancy, counterargumentation will show a negative relation with attitude change as advocated by the message." To check for some kind of demand biased responses, "at the conclusion of the experiment, selected subjects were debriefed to determine whether they had guessed the purpose of the research." Guess what?... "Results of the debriefing indicated that subjects did not guess the purpose of the study." Actually, only the experimenters could have guessed the purpose of the research. But if, in the postexperimental inquiry, a respondent indicated that the purpose of the research was "to see the impact of the music on my ideas", does that mean there is no risk of demand artifacts?

Furthermore, by reducing demand biased responses to guessing the purpose of the research, one could easily conclude that experiments on sophisticated fields of investigation such as the effects of distraction on conditioning and attitude formation are systematically free of demand artifacts.

Bearing these examples in mind, defining hypothesis guessing as the core of demand biased responses is highly dangerous and conceals what demand artifacts really are: a source of error in measurements resulting from role adoption by consumers involved in an experiment, induced by subjects' reactions to an experimental manipulation, not by the level of sophistication in the formalization of the research purpose. Of course, an experimental manipulation is the outcome of a research purpose; it is a necessary but not sufficient condition for solving the research question. For instance, in research where two factors are manipulated, the purpose of the investigation (PI) may be formalized as follows:

PI = F1 + F2 + [F1 × F2]     (eq. 2)

Under a strict application of the definition of demand artifacts as resulting from the guessing of the research purpose (PI), a consumer has to identify both factors 1 and 2 (F1 and F2) as well as the interaction between them (F1 × F2) in order to be demand biased. On the other hand, if demand artifacts are connected to the identification of manipulations, as stated in our revised definition, a consumer who has identified either F1 or F2 may be suspected of being demand biased. The rationale supporting this extended definition of demand artifacts is analytically straightforward: when the consumer adopts a role (consciously or unconsciously) derived from the identification of a manipulated factor, it is most likely (depending on the role) that the observed variance in the dependent variables resulting from the manipulation of this factor will be demand biased. This is also true for interaction terms involving this factor, which, consequently, will damage conclusions about the purpose of the investigation. From an analytical point of view, contrasting respondents who identified a manipulated factor with those who did not allows one to check for significant differences in observed results. One must keep in mind that demand artifacts are a sticky issue when they introduce error variance in observations, not merely when a respondent identified either a manipulation or the research hypothesis (our new definition vs. the conventional one). Furthermore, measuring the identification of manipulations instead of the experimental purpose is much more straightforward, and far less demand inducing, than asking experimental units to guess the experimental purpose (see Sawyer 1975 for PEI questions).
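The analytical contrast between the two definitions can be made concrete. With hypothetical, independent per-factor identification probabilities (illustrative values only, not drawn from any study), the share of subjects flagged as potentially demand biased differs sharply depending on which definition is applied:

```python
# Hypothetical, independent probabilities that a subject identifies each of
# the two manipulated factors in a two-factor experiment (illustrative only).
p_f1, p_f2 = 0.30, 0.20

# Revised definition: a subject is suspect if EITHER factor is identified.
p_revised = 1 - (1 - p_f1) * (1 - p_f2)

# Strict conventional reading: suspect only if the full research purpose,
# i.e. both factors (and hence their interaction), is reconstructed.
p_conventional = p_f1 * p_f2

print(round(p_revised, 2), round(p_conventional, 2))  # 0.44 0.06
```

Under these assumed values, the conventional hypothesis-guessing criterion flags roughly one subject in seventeen, while the manipulation-identification criterion flags nearly half the sample, which illustrates why the former systematically understates the risk of demand bias in multi-factor designs.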

Even if we suppose that some researchers, while assessing demand artifacts through, let us say, a post-experimental inquiry, decided to extend the identification of potentially demand biased respondents to the discerning of manipulations, it is all the more certain that several experiments did not consider factor identification as a potential source of error. For instance, in one of the Kellaris and Cox experiments duplicating Gorn's pioneering work, it would be very surprising if, in the post-experimental inquiry, only one student (out of 299) reported either music influence or pen choice as the purpose of the exercise, considering that these variables were not only manipulated but were also subject to some obtrusive measures: as far as music was concerned, the appeal of the music heard was measured on a seven-point scale; and, as stated by the authors with respect to the choice of pen color (yellow pens wrote in blue and white pens in black), "it is possible that some subjects may have noticed this difference after they began writing..." (Kellaris and Cox 1989, p. 114). By restricting demand artifacts to hypothesis guessing, researchers adopt a highly conservative approach to error assessment and create some confusion about what hypothesis guessing really means ("guessing the guessing"). We argue that identifying a manipulation may initiate a role that results in demand biased responses. Finally, manipulations in behavioral research are twofold: the experimenter may manipulate either instructions (e.g. research on consumption situations) or events (e.g. the content of a message) (Kerlinger 1988). Special attention should be given to both the control and the assessment of demand biased responses in the case of instruction manipulation, as it provides subjects with explicit cues about the nature of the experimental factors and requires experimental units to formally play a role.

CONCLUSION

Our content analysis of the formal concern with demand artifacts in published consumer behavior research shows that they tend to be underreported, even when the selected experimental design increases the probability of demand biased responses (e.g. within-subject designs).

Our results also shed some light on deception. It should be stressed that deception should aim only at reducing the occurrence of demand biased responses. By no means should it be viewed as a casual experimental practice. Bearing in mind the well-documented literature on the risks associated with deception, consumer behavior researchers should question its relevance. The key question becomes: does it really reduce the probability of demand biased responses?

Secondly, we argue that manipulation identification, not hypothesis guessing, should be considered the source of potential demand artifacts. Restricting demand artifacts to hypothesis guessing leaves room for a subjective assessment of the presence or absence of potential demand artifacts, based on the researcher's decision about whether the research purpose was actually identified. This new definition of demand artifacts may be perceived as very restrictive, although it reduces the current ambiguity surrounding the detection of demand biased responses.

We expect that this new definition of demand artifacts will bypass such an ambiguity. Yet, as mentioned earlier, demand artifacts are a threat to theory construction as long as they provide a rival explanation of experimental findings. Hence, we must not restrict demand artifacts to the identification of either a research purpose or a manipulation, for we must also investigate the effects of such an identification on the dependent variables (through various techniques such as covariance analysis or the partitioning of results). We believe that our approach to demand artifact identification will simplify this task and remove the inherent subjectiveness in the interpretation of the research purpose by experimental units. We cannot search for objectivity by relying on subjective processes of error identification.

REFERENCES

Berkowitz, Leonard and Edward Donnerstein (1982), "External Validity Is More Than Skin Deep: Some Answers to Criticisms of Laboratory Experiments", American Psychologist, 37 (March), 245-257.

Christensen, Larry (1980), Experimental Methodology, 2nd edition, Boston, MA: Allyn and Bacon.

Darley, William K. and Jeen-Su Lim (1993), "Assessing Demand Artifacts in Consumer Research: An Alternative Perspective", Journal of Consumer Research, 20 (December), 489-495.

Farber, I.E. (1963), "The Things People Say to Themselves", American Psychologist, 18, 185-197.

Fiske, Donald W. (1982), "Convergent-Discriminant Validation in Measurement and Research Strategies", in New Directions for Methodology of Social and Behavioral Science: Forms of Validity Research, D. Brinberg and L. Kidder, eds., San Francisco: Jossey-Bass, 72-92.

Gordon, Michael E., L. Allen Slade and Neal Schmitt (1986), "The "Science of the Sophomore" Revisited: From Conjecture to Empiricism", Academy of Management Review, 11 (1), 191-207.

Gorn, Gerald J. (1982), "The Effects of Music in Advertising on Choice Behavior: A Classical Conditioning Approach", Journal of Marketing, 46 (winter), 94-101.

Hunt, Shelby D. (1993), "Objectivity in Marketing Theory and Research", Journal of Marketing, 57 (April), 76-91.

Kellaris, James J. and Anthony D. Cox (1989), "The Effects of Background Music in Advertising: A Reassessment", Journal of Consumer Research, 16 (June), 113-118.

Kerlinger, Fred (1988), Foundations of Behavioral Research, 3rd edition, New York: Holt, Rinehart and Winston.

Kruglanski, Arie W. (1975), "The Human Subject in the Psychology Experiment: Fact and Artifact", in Advances in Experimental Social Psychology, vol. 8, ed. L. Berkowitz, Orlando, FL: Academic Press, 101-147.

Nelson, James E., Calvin P. Duncan and Nancy T. Frontczak (1985), "The Distraction Hypothesis and Radio Advertising", Journal of Marketing, 49 (Winter), 60-71.

Orne, Martin T. (1962), "On the Social Psychology of the Psychological Experiment: With Particular Reference to Demand Characteristics and Their Implications", American Psychologist, 17 (November), 776-783.

Rosenberg, M. (1969), "The Conditions and Consequences of Evaluation Apprehension", in Artifact in Behavioral Research, eds. R. Rosenthal and R. L. Rosnow, New York: Academic Press, 280-350.

Rosnow, Ralph L. and Leona S. Aiken (1973), "Mediation of Artifacts in Behavioral Research", Journal of Experimental Social Psychology, 9 (May), 181-201.

Sawyer, Alan (1975), "Demand Artifacts in Laboratory Experiments in Consumer Research", Journal of Consumer Research, 1 (March), 20-30.

Shimp, Terence A., Eva M. Hyatt and David J. Snyder (1991), "A Critical Appraisal of Demand Artifacts in Consumer Research", Journal of Consumer Research, 18 (December), 273-283.

Shimp, Terence A., Eva M. Hyatt and David J. Snyder (1993), "A Critique of Darley and Lim's "Alternative Perspective"", Journal of Consumer Research, 20 (December), 496-501.

Silverman, Irwin (1977), The Human Subject in the Psychological Laboratory, New York: Pergamon.
