A Test of the Learning Hierarchy in High- and Low-Involvement Situations

ABSTRACT - The learning hierarchy suggests various sequences of consumer response to print advertisements. Three such sequences are tested in situations of both high and low involvement. The results indicate that learning hierarchies reasonably represent the consumer response in both situations. In particular, a modified version of the learning hierarchy which includes indirect, as well as direct, effects appears to provide the best predictions of choice processes.


George M. Zinkhan and Claes Fornell (1989) ,"A Test of the Learning Hierarchy in High- and Low-Involvement Situations", in NA - Advances in Consumer Research Volume 16, eds. Thomas K. Srull, Provo, UT : Association for Consumer Research, Pages: 152-159.



George M. Zinkhan, University of Houston

Claes Fornell, The University of Michigan




The learning hierarchy is a simple causal chain model of communication effectiveness and specifies two causal linkages: cognition leads to affect which, in turn, leads to conation. Some version of this hierarchy has been used as a planning tool in advertising for as long as 50 years, and it has been referred to as the "learning hierarchy" for more than ten years. The learning hierarchy is deemed to be most appropriate for explaining consumer responses to advertising when print media are used or under conditions of high involvement. In this context, involvement was originally conceptualized by Krugman (1965) as "the number of conscious 'bridging experiences,' connections, or personal references per minute that the viewer makes between his own life and the stimulus. This may vary from none to many." Thus, it has been hypothesized that the process of communication impact may differ depending upon the degree of involvement or the particular media employed.

Krugman's (1965) alternative to the learning hierarchy is the low-involvement hierarchy, which posits that affective development follows, rather than precedes, conative development. Specifically, the low-involvement hierarchy is thought to be most appropriate for broadcast media and low-involvement situations.

In their synthesis of information response models, Smith and Swinyard (1982) suggest certain circumstances under which the learning hierarchy may apply to low-involvement message topics. Specifically, Smith and Swinyard (1982) have pointed out that exposure to an advertisement for a low-involvement product may lead directly to purchase. That is, for a low-risk or low-involvement product, trial involves low consumer costs.

Accordingly, the key questions, as examined in this study, are: 1) for low-involvement situations, can exposure to a print advertisement result in a positive affective, as well as cognitive, response? and 2) if so, is this affective response predictive of behavior? If the answer to both of these questions is affirmative, then the learning hierarchy may be appropriate for explaining consumer responses to advertising in low-involvement situations.


As the above discussion suggests, it may be necessary to modify the original conceptualization of what constitutes the learning hierarchy. At the most basic level, the model suggests that cognition (often operationalized as recall or awareness) leads to affect (or attitude toward the brand); and affect, in turn, leads to conation. This simple hierarchy remains attractive because of parsimony, simplicity, and practical value. However, recent theoretical work suggests that a second dimension of affect--namely, attitude toward the ad--may be useful to include (Mitchell and Olson 1982). For example, attitude toward the ad (ATTa) could be included in the model as a precursor to recall. This follows Moore and Hutchinson's (1983) contention that ads which produce extensive affective reactions may increase attention and thus improve message recall. Intuitively, it seems that ads which are well liked may also be well remembered.

A second modification of the learning hierarchy involves the addition of indirect effects. For example, there may be a direct link between cognition and conation; that is, advertised brands which are well remembered are likely to be included in a consumer's consideration set. Especially in the case of an impulse item (e.g., low involvement), advertising may lead the consumer directly from awareness to intention. Petty and Cacioppo's (1981) concepts of peripheral and central processing routes are relevant here. Under the central route, issue-relevant information such as brand attributes may be the most relevant for forming purchase intentions; this path is represented in the simple hierarchy through an attitude toward the brand-behavioral intention (ATTb -> BI) link. Under the peripheral route, purchase intentions may be formed due to non-content cues in the situation. These non-content cues may include facts recalled about the ad which are not highly internalized. Lutz (1979) provides an example of how this peripheral route may be relevant for the recall -> BI link. A particular consumer may drive Hertz Rent-A-Cars not because of salient and positively evaluated attributes of the company (ATTb), but instead because the consumer remembers that O. J. Simpson endorses the company (peripheral route). Although Petty and Cacioppo (1981) developed the theory of alternative routes with attitude change in mind, the theory seems equally relevant to purchase intentions, as the rent-a-car example illustrates.

A third modification involves investigating the multiple paths connecting cognition, affect, and conation. In many formulations of hierarchical models, it is assumed that there are no direct paths between stages which do not follow one another sequentially. However, evidence is accumulating which suggests that some of these out-of-sequence effects may occur. For example, Mitchell and Olson (1982) have found that ATTa might be an important mediator of ATTb. In a similar manner, Shimp (1981) has argued that ATTa may have a direct influence on choice behavior. A third model, then, represents the case where advertising response constructs are allowed to influence one another nonsequentially. At the core may be a simple hierarchy-of-effects model; however, nonsequential paths are also expected to exist simultaneously with sequential ones.


Figure A presents the three models that will be examined in this study; these competing conceptualizations are labeled: the simple hierarchy model, the extended hierarchy model, and a saturated model. Cognition is operationalized as recall of facts about the advertised brand. Affect is divided into two constructs: attitude-toward-the-ad (ATTa) and attitude-toward-the-brand (ATTb). Conation is also separated into two constructs: behavioral intention (BI) and choice behavior (CB). As in past formulations, all paths between the constructs are expected to be positive.

Here, the appropriateness of the learning hierarchy is examined for print advertising, and an effort is made to determine whether or not some version of the learning hierarchy may be applicable to low-involvement, as well as to high-involvement, message topics. Alternative versions of the learning hierarchy are tested against one another; and, in this sense, different theories of consumer response to advertising are set in competition.

Of course, the models represented in Figure A do not exhibit all possible relationships between the constructs. For example, as previously mentioned, it has been proposed that conation could influence affect. However, this possibility is not considered since print advertising is used and the learning hierarchy seems more appropriate. Additionally, no reciprocal relationships are posited, which is in accord with the previous theoretical work in this area (e.g., Smith and Swinyard 1982). Also, causation is not allowed to flow backward through time. For instance, recall is measured one day after ATTa; so, in this instance, recall is not considered as a cause of ATTa.

In this study, the alternative learning paths are examined in a situation where there are print ads for a new brand in an established product class. Two product classes are employed--ice cream (which is classified as a low-involvement message topic) and cameras (which is classified as a high-involvement message topic). One purpose of the study, then, is to determine whether or not different hierarchies are required to account for the advertising responses that result for high- versus low-involvement product classes.

There are many approaches to involvement. For example, the concept can refer to "issue" involvement--the degree to which the consumer cares about a particular issue or outcome. Alternatively, the concept can refer to involvement with, or the importance of, a particular purchase situation. In this study, involvement is conceptualized as topic- or message-involvement--the degree to which the consumer "cares about" a specific product or service. Seen from this perspective, a particular product is not inherently high- or low-involving according to some objective criteria; rather, involvement is a function of the consumer's perspective or perception and can differ across consumers. Thus it is not possible to classify certain product categories, a priori, as eliciting high or low consumer involvement. In this study, high- and low-involvement products are defined from a consumer perspective, and Buchanan's (1964) relative involvement scale is used to separate a high-involvement from a low-involvement product on an individual basis. Given this categorization of message involvement, the three hierarchical models shown in Figure A are tested against one another in terms of their ability to explain the sequence and processes of consumer responses to print advertisements.


The participants in the survey were 164 consumers recruited from a suburban shopping mall in a major metropolitan area. These respondents, after being identified as potential purchasers of ice cream and cameras, signed up to participate in the sessions. Questionnaires were administered to groups ranging in size from 6 to 11; respondents were paid $10 for their participation after the conclusion of the last session. Fourteen respondents were excluded from the survey due to incomplete or inconsistent responses, and twenty-five respondents were excluded following manipulation checks, resulting in a final sample size of 125.


The stimulus objects consisted of two print advertisements for new, fictitious brands. Fictitious brands were used to eliminate the effects of prior promotional campaigns. The ads appeared in a booklet along with other advertisements and short articles that might appear in a national news magazine. All ads were designed and produced by a major advertising agency to adhere to realistic standards of quality. The presentation of stimuli was randomly rotated so as to eliminate any ordering effects.


Data to estimate these models were gathered in three sessions. In the first two sessions, subjects were exposed to print advertisements for a group of products, including ice cream and camera brands. The ads were embedded in material that might appear in a magazine. At the start of the first session, Buchanan's (1964) relative involvement scale was administered to ensure that respondents found cameras to be more involving than ice cream. Those who didn't exhibit involvement scores in the expected direction were omitted from the sample. At the conclusion of the second session, ATTa was assessed through the use of an abbreviated version of the Wells (1964) reaction profile consisting of the four items: enjoy, interest, like, and good. In the third session, which occurred one day after the second, questions were administered to operationalize recall, attitude, purchase intention, and choice processes.

Ad recall was operationalized with a test of understanding of the ad, administered the day after it had been seen. Respondents played back what they could remember about the ads, unaided, and then answered five true/false questions about each ad, based on facts contained in the ad. This latter method constitutes an aided measure of message recall and is similar to the approach used in the Ayer model of new products (Claycamp and Liddy 1969).

The empirical variables associated with ATTb consist of four evaluative bipolar adjective scales. This is the method traditionally used by Fishbein and his associates to measure ATTb (Fishbein and Ajzen 1975).

Purchase intention is measured using Juster's (1964) intentions scale (which assesses the probability that a purchase will occur) and with a group of three semantic differential scales concerning behavioral intention. These three semantic differential scales were summed to form a single, composite score.

For ice cream, choice behavior is measured by giving subjects an opportunity to buy one of several ice cream brands using their reimbursement money. Choice of the advertised brand is coded as 1; choice of a competing brand or no brand is coded as 0. Since cameras are too expensive for this approach, a simulated purchase, using artificial money, is substituted. In this last respect, conditions were not the same for the two product classes. Neither choice decision is truly natural, but the ice cream condition seems to be more ecologically valid than the simulated purchase condition for cameras.


The increasing application of causal models with latent variables in marketing represents a substantial methodological advance for at least three reasons. It enables (i) explicit modeling of measurement residuals, (ii) better identification and elimination of spurious relationships, and (iii) tests of theoretical relationships by use of latent variables. In this study, three competing models are represented in Figure A. They can all be represented by two general sets of equations, the structural relations and the measurement relations. In terms of predictor specification the structural relations can be written:

E(η | η, ξ) = Bη + Γξ    (1)

where η = (m x 1) is a column vector of unobserved criterion variables; ξ = (n x 1) is a column vector of unobserved predictor variables; B = (m x m) is a matrix of criterion coefficients; and Γ = (m x n) is a matrix of predictor coefficients.

The measurement relations are:

y = Λy η + ε    (2)

x = Λx ξ + δ    (3)

where y = (p x 1) is a column vector of criterion measures; x = (q x 1) is a column vector of predictor measures; Λy = (p x m) is a matrix of regression coefficients of y on η; Λx = (q x n) is a matrix of regression coefficients of x on ξ; ε = (p x 1) is a column vector of endogenous variable measurement residuals; and δ = (q x 1) is a column vector of exogenous measurement residuals.


The purpose of the analysis is twofold: (1) to explain choice behavior and (2) to test hypothesized causal orders among the latent variables. The first objective is akin to traditional regression analysis, with its emphasis on "explained" variance in the dependent variable. The second objective is related to that of factor analysis, with its emphasis on "explained" covariance. In order to maximize the former and yet model the observed-unobserved variable relationship as specified in Equations (2) and (3), we need to minimize the trace of Ψ (the variance-covariance matrix of ζ = η - E(η)), the trace of Θε (the variance-covariance matrix of ε), and the trace of Θδ (the variance-covariance matrix of δ). In order to test the causal structure, we need to compare the covariance (correlation) matrix of the latent variables with the covariance (correlation) matrix of the latent variables as derived from the causal structure. We make no assumptions about the causal nature of the observed variable indicators with respect to the structural relations, nor do we make any distributional assumptions. Further, the measures are not treated as alternative measures of the same thing (as in true score theory) but rather as indicators with some degree of specific variance unrelated to their respective latent variables. The model and estimation approach that best satisfies the above requirements is Wold's PLS (Partial Least Squares) method.
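As an illustration of this estimation logic, the sketch below simulates indicator data for two constructs and estimates a single structural path from composite construct scores. It is a deliberately simplified, one-step stand-in for Wold's iterative PLS weighting scheme; all data, constructs, and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 125 respondents, two latent constructs (recall, ATTb),
# with a true structural path recall -> ATTb of 0.5.
n = 125
true_recall = rng.standard_normal(n)
true_attb = 0.5 * true_recall + rng.standard_normal(n)

def indicators(latent, k, noise=0.6):
    """Generate k noisy indicators of a latent variable."""
    return np.column_stack([latent + noise * rng.standard_normal(n) for _ in range(k)])

X_recall = indicators(true_recall, 5)  # e.g., five recall items
X_attb = indicators(true_attb, 4)      # e.g., four bipolar adjective scales

def standardize(a):
    return (a - a.mean(axis=0)) / a.std(axis=0)

X_recall, X_attb = standardize(X_recall), standardize(X_attb)

# Construct scores as equally weighted composites of their indicators --
# a one-step simplification of PLS's iterative outer weighting.
recall_score = standardize(X_recall.mean(axis=1))
attb_score = standardize(X_attb.mean(axis=1))

# Structural path (recall -> ATTb): correlation of standardized scores.
path = np.corrcoef(recall_score, attb_score)[0, 1]

# Outer loadings: correlation of each indicator with its construct score.
loadings_attb = np.array([np.corrcoef(X_attb[:, i], attb_score)[0, 1]
                          for i in range(X_attb.shape[1])])

print(round(path, 2), np.round(loadings_attb, 2))
```

The estimated path is attenuated slightly relative to the true coefficient because the composites contain measurement noise, which is precisely the residual variance that the minimization described above seeks to control.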


Following the procedure developed by Fornell et al. (1982), the three models are examined with respect to convergent, discriminant, and nomological validity. The models will also be examined with respect to causal structure. The results for all six models (three low involvement, three high involvement) are summarized in Table 1.




Convergent Validity

Convergent validity can be defined as the degree to which two or more attempts to measure the same construct through maximally different methods are in agreement. That the methods be "maximally different" is an ideal that indicates the rigor of the empirical test rather than a precondition of analysis. The degree of "agreement" among the methods used can be assessed by the average variance a construct shares with its measures, Pvc. Thus, from Equation (2), the variance shared by the construct ηj with its measures yij is given by:

Pvc(ηj) = Σi λij^2 / (Σi λij^2 + Σi Var(εij))

for L measures of ηj (i = 1...L), j = 1...m, and similarly for Pvc(ξk), k = 1...n. A condition for satisfying convergence is that the value of Pvc for a construct be greater than 0.5, i.e., the true variance should at least be greater than the error variance.
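The Pvc criterion can be sketched as follows. With standardized indicators, the error variance of each measure is 1 - λ², so Pvc reduces to the mean squared loading; the loadings used here are hypothetical.

```python
# Average variance a construct shares with its measures (Pvc),
# from standardized loadings: Pvc = sum(l^2) / (sum(l^2) + sum(1 - l^2)).
def pvc(loadings):
    shared = sum(l ** 2 for l in loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return shared / (shared + error)

# Hypothetical loadings for a four-item attitude construct.
atta_loadings = [0.91, 0.88, 0.93, 0.87]
print(round(pvc(atta_loadings), 2))  # 0.81 -- well above the 0.5 criterion
```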

An alternate global statistic to compensate for the effect of increasing numbers of constructs is the ratio of the variance shared in the model to the number of measures and constructs:


By this formula, values of M2 will range from 0 to 1 and will be high when measurement error is low and when a minimum number of constructs is specified. The value of M2 can also be calculated for any subset of the measurement model.
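Taking the verbal definition literally, M2 can be sketched as the total variance shared between constructs and their measures, divided by the number of measures plus the number of constructs. This is one plausible reading of the statistic, and the loadings below are hypothetical illustrations.

```python
# One reading of M2: total shared variance (sum of squared standardized
# loadings) divided by (number of measures + number of constructs).
def m2(loadings_by_construct):
    shared = sum(l ** 2 for ls in loadings_by_construct for l in ls)
    n_measures = sum(len(ls) for ls in loadings_by_construct)
    n_constructs = len(loadings_by_construct)
    return shared / (n_measures + n_constructs)

# Hypothetical loadings: four constructs, twelve measures (choice excluded).
model = [
    [0.90, 0.88, 0.91, 0.89],  # ATTa: abbreviated Wells reaction profile
    [0.86, 0.90],              # Recall: unaided playback, aided true/false score
    [0.88, 0.85, 0.87, 0.86],  # ATTb: evaluative bipolar adjective scales
    [0.92, 0.70],              # BI: Juster scale, summed semantic differentials
]
print(round(m2(model), 2))  # 0.57
```

With uniformly high loadings and twelve measures on four constructs, this reading yields values in the same range as those reported below (.52 to .58).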

As summarized in Table 1, all six models tested achieved convergent validity according to the criterion that each construct, on average, shares more variance in common with its indicators than it does with error. This fact is also revealed in the measurement model: in the majority of cases, the correlations between indicator and construct are larger than .85. Thus, from evaluating the Pvc statistics, evidence of convergent validity is found for all the models tested.

Further, in all models, M2 is calculated using the first twelve variables. Choice behavior is not included since convergent validity cannot be assessed for this construct with only a single measure available. In brief, M2 represents the ratio of the variance shared in the model to the number of measures and constructs. It is not surprising that the M2 values are similar for the three hierarchical models investigated, since the number of measures and constructs remains constant in all of the models. The high-involvement models seem slightly superior to the low-involvement models (.58 vs. .52) in terms of convergent validity, but the M2 values seem acceptable for both situations.

In summary, the simple hierarchy, the extended hierarchy, and the saturated model all seem to be acceptable in terms of convergent validity. However, results of the convergent validity tests are not sensitive enough to select one of these models as superior to the others; the simple hierarchy may be preferable in the interest of parsimony.

Discriminant Validity

Fornell et al. (1982) demonstrate how PLS can be used to assess discriminant validity, the degree to which a construct differs from other constructs. If the squared correlation between any two constructs is lower than Pvc for a construct, then there is evidence of discriminant validity. That is, discriminant validity is indicated if the variance shared between any two different constructs is less than the variance shared between a construct and its measures.

For all three high-involvement models tested, each of the constructs with multiple measures shares more variance in common with its indicators than with other constructs in the model. In the low-involvement models, one of the constructs fails this test. Specifically, the relationship between behavioral intention and choice process is stronger than the relationship between behavioral intention and one of its indicators (the composite of the semantic differential measures). The problem may lie in the fact that the choice process follows so closely upon the heels of the BI measure. In a field study, considerable time would likely elapse between forming a behavioral intention and the actual choice. Other variables, such as distribution or availability, would then intervene, causing these two constructs to diverge. Here, in a laboratory setting, these measures may be taken too close together to achieve the degree of discrimination that is desired. From another perspective, the problem arises not because of the high association between BI and choice, but because of the relatively low loading between BI and the semantic differential used to measure BI. In fact, more than half of the variance associated with the semantic differential scale is error variance. That is, the semantic differential approach does not seem as satisfactory as Juster's (1964) probability approach in this instance. One solution would be to delete the summated semantic differential measure, but this would leave only one indicator for behavioral intention. Alternatively, all conative measures could be collapsed into a single construct, but this would inhibit comparisons with the high-involvement product, where this problem does not arise. Instead, it was decided to retain two conative constructs and to keep both indicators of BI, although one of these indicators appears inferior.
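The discriminant test described here (comparing the squared inter-construct correlation against each construct's Pvc) can be sketched as below; the numeric values are hypothetical, chosen only to echo the kind of BI/choice failure just discussed.

```python
# Discriminant validity: the squared correlation between two constructs
# should be lower than the Pvc of each construct involved.
def discriminant_ok(pvc_a, pvc_b, corr_ab):
    return corr_ab ** 2 < pvc_a and corr_ab ** 2 < pvc_b

# Hypothetical values: a weakly measured BI construct (Pvc = .55) that
# correlates .78 with choice -- .78^2 = .61 exceeds .55, so the test fails.
print(discriminant_ok(0.55, 0.80, corr_ab=0.78))  # False
print(discriminant_ok(0.80, 0.80, corr_ab=0.50))  # True
```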

Examination of Explained Variance

Table 1 summarizes the amount of variance explained by the alternative models. For the low-involvement situation, 17% of Recall variance is explained; 28% of ATTb variance is explained; 17% of BI variance is explained; and 78% of choice variance is explained. A similar pattern is observed for the simple hierarchy when the high-involvement situation is investigated. In sum, the simple hierarchy model is moderately successful.

The amount of explained variance does not increase appreciably when moving from the simple hierarchy to the saturated model. In a like manner, the simple hierarchy appears superior to the saturated hierarchy for the high-involvement product. The amount of explained variance does not appreciably increase with the addition of six new paths. In this instance, the simple hierarchy seems to provide the most parsimonious representation of the data without sacrificing explanatory power.

Nomological Validity

Nomological validity is used here to mean the degree to which predictions of constructs in the model are verified. Thus, this definition of nomological validity is construct specific and applicable only to endogenous constructs.

Table 1 summarizes the findings with respect to nomological validity. First, through an examination of the coefficient of determination (R2) for the final endogenous variable, it seems as if there is not much difference among the three models tested. For the low-involvement product, about 78% of the variance in choice behavior is explained; for the high-involvement product, this figure is around 58%. There is a tendency for more variance to be explained as more paths are added, but this tendency is very slight. At most, only a 2% increase in explained variance is gained by adding extra paths.

A similar pattern emerges when examining the mean R2 across all endogenous variables in a model. For example, on average, 35% of the variance is explained for the low-involvement endogenous constructs in the simple hierarchy. This figure increases to 36% for the extended hierarchy and again to 37% for the saturated model. Similar results are obtained for the high-involvement product. Relatively little explanatory power is gained by adding extra paths.

Finally, the root mean square residual (RSM) is examined. This represents a fit index for causal structure, with lower values indicating better fit. Such an index is not available for the saturated hierarchy since it, by definition, has a perfect fit. Of the two remaining models, the extended hierarchy appears superior, since RSM improves from .074 to .055 for the low-involvement data and from .180 to .067 for the high-involvement data. In short, evidence is found that the extended hierarchy provides a better representation than does the simple hierarchy. This finding appears particularly relevant for the high-involvement situation where the RSM is more than halved by the inclusion of one extra path coefficient.
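The RSM index can be sketched as the root mean square of the off-diagonal differences between the observed latent-variable correlation matrix and the one implied by the hypothesized causal structure; the matrices below are hypothetical.

```python
import numpy as np

# Root mean square residual between observed and model-implied correlations,
# averaged over the off-diagonal elements only (diagonals are 1 by definition).
def rsm(observed, implied):
    resid = observed - implied
    mask = ~np.eye(resid.shape[0], dtype=bool)
    return np.sqrt(np.mean(resid[mask] ** 2))

# Hypothetical 3x3 correlation matrices for three latent constructs.
observed = np.array([[1.00, 0.50, 0.30],
                     [0.50, 1.00, 0.60],
                     [0.30, 0.60, 1.00]])
implied = np.array([[1.00, 0.50, 0.30],
                    [0.50, 1.00, 0.55],
                    [0.30, 0.55, 1.00]])
print(round(rsm(observed, implied), 3))  # 0.029
```

A saturated model reproduces the observed matrix exactly, which is why no RSM value is available for it: its residuals are zero by construction.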


Summary of Results

Three variants of the learning hierarchy were tested using two product categories designed to differ from one another in terms of topic involvement. The success of the involvement manipulation was checked through an application of Buchanan's relative involvement scale; each subject was allowed to have his/her individualized involvement score for the relevant product categories. All three models tested were acceptable in terms of discriminant and convergent validity. Since the explanatory power of the more complex, saturated hierarchy did not increase appreciably, this model was rejected. Upon closer examination, the extended hierarchy appeared to be superior to the simple hierarchy in terms of fitting the causal structure, as revealed by the RSM index. Both models were equivalent in terms of explaining the variance in choice behavior.

That the saturated model can be rejected has some important implications for communication research. Recently, there has been some uneasiness about the simple hierarchical model, and individual studies have indicated that certain nonsequential paths may be important. Here, however, it appears that, overall, little is gained by adding these extra paths to the basic, simple hierarchy except for the recall-behavioral intention link. The extended hierarchy, which includes this link, is not much more complex than the simple hierarchy; it contains but one additional path.

It is also important to note that choice behavior is more successfully explained for the low-involvement product (R2 = .78) than for the high-involvement product (R2 = .58). One explanation for this may be that respondents perceived the simulated choice task to be more relevant for low- as opposed to high-involving products. Related to this point, it may not take as many variables to explain the choice of a low-risk product as it does to explain the choice of a high-risk product. As Smith and Swinyard (1982) have pointed out, an advertisement for a low-involvement product can more directly lead a consumer to trial. Thus, advertising effectiveness measures may be better able to predict trial in a low-involvement situation. And, in this sense, the learning hierarchy may be more appropriate for low-involvement than for high-involvement products.

Advertising Effectiveness: Theory and Measures

The results of this study point out some interesting facts pertaining to the measurement and theory of advertising responses. With respect to measurement, it seems that advertising research has made some progress. For example, most measures have low error variances. Similarly, the loadings are uniformly high. The same pattern emerges for both the high- and low-involvement processes. In particular, the measurement of ATTa and Recall seems to be especially refined. Pvc levels for these constructs are particularly high, ranging from .78 to .88. The abbreviated Wells (1964) reaction profile and the aided recall of advertising facts, as developed by Claycamp and Liddy (1969), seem to operate as successful measures of the constructs they were designed to operationalize.

The semantic differential scales used to operationalize ATTb and BI are somewhat less successful, but still appear good enough in that all tests of discriminant and convergent validity are passed for the high-involvement process. The one measure that does operate rather poorly in this data set is the semantic differential scale used to operationalize BI; for both product categories, more error variance is exhibited than variance shared with the measured construct. In contrast, the Juster (1964) scale used to operationalize BI through a probability approach appears to be much more satisfactory and correlates very highly with the construct it is designed to measure. This scale certainly deserves further attention in the marketing literature.

Unfortunately, most of the scales described above are particularly well suited for laboratory studies but appear to be of limited value for more naturalistic field surveys. This, in turn, limits the advancement of advertising theory. For example, Krugman's (1965) low-involvement hierarchy is rarely tested, partly because of the difficulty of measuring consumers' in-store reactions. The low-involvement hierarchy suggests that a conscious perception of the advertising message does not take place until the consumer is at the moment of purchase. Measurement in the store, at the moment of purchase, remains a problem despite the fact that there is increasing interest in monitoring in-store behavior.

It is in this sense that advertising measurement procedures lag behind advertising theories and impede theoretical advancement. Nineteen years after Krugman first introduced the low-involvement hierarchy, his theory remains popular but largely untested. With the introduction of electronic scanner data and other in-store procedures, we may be in a better position to assess and advance theories about how advertising works.


Buchanan, Dodds I. (1964), "How Interest in the Product Affects Recall: Print Ads vs. Commercials," Journal of Advertising Research, 4 (No. 1), 9-14.

Claycamp, Henry J. and Lucien E. Liddy (1969), "Prediction of New Product Performance: An Analytical Approach," Journal of Marketing Research, 6 (November), 414-20.

Fishbein, Martin and Icek Ajzen (1975), Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Reading, MA: Addison-Wesley.

Fornell, Claes, Gerard J. Tellis, and George M. Zinkhan (1982), "Validity Assessment: A Structural Equations Approach Using Partial Least Squares," in An Assessment of Marketing Thought and Practice (Bruce Walker, ed.), Chicago: American Marketing Association, 405-09.

Juster, F. Thomas (1964), Anticipations and Purchases: An Analysis of Consumer Behavior, Princeton: National Bureau of Economic Research.

Krugman, Herbert E. (1965), "The Impact of Television Advertising: Learning Without Involvement," Public Opinion Quarterly, 29 (Fall), 349-56.

Lutz, Richard J. (1979), "A Functional Theory Framework for Designing and Pretesting Advertising Themes," in Attitude Research Plays for High Stakes, eds., J. Maloney and B. Silverman, Chicago: American Marketing Association, 53-73.

Mitchell, Andrew and Jerry C. Olson (1982), "Are Product Attribute Beliefs the only Mediator of Advertising Effects on Brand Attitude?" Journal of Marketing Research, 18 (August), 318-32.

Moore, Danny L. and J. Wesley Hutchinson (1983), "The Effects of Ad Affect on Advertising Effectiveness," in Advances in Consumer Research Vol. X (R. P. Bagozzi and A. M. Tybout, eds.), Ann Arbor: Association for Consumer Research.

Petty, Richard E. and John T. Cacioppo (1981), Attitudes and Persuasion: Classic and Contemporary Approaches, Dubuque, Iowa: William C. Brown Co.

Preston, Ivan L. (1982), 'The Association Model of the Advertising Communication Process," Journal of Advertising, 11 (No. 2), 3-15.

Shimp, Terrence (1981), "Attitude Toward the Advertisement as a Mediator of Consumer Brand Choice," Journal of Advertising, 10 (No. 2), 9-15.

Smith, Robert E. and William R. Swinyard (1982), "Information Response Models: An Integrated Approach," Journal of Marketing, 46 (Winter), 81-92.

Wells, William (1964), "EQ, Son of EQ and the Reaction Profile," Journal of Marketing, 28 (4), 45-52.


