Constructs and Perceptions in Public Policy Analysis

William L. Wilkie, University of Florida
[ to cite ]:
William L. Wilkie (1983), "Constructs and Perceptions in Public Policy Analysis", in NA - Advances in Consumer Research Volume 10, eds. Richard P. Bagozzi and Alice M. Tybout, Ann Arbor, MI: Association for Consumer Research, Pages: 490-491.



This paper discusses the two papers presented in this session -- Scammon, Mayer, and Bamossy's "The FTC's Participation Funding Program: Perceptions of Applicants," and Omura and Talarzyk's "Shaping Public Opinion: Personal Sources of Information on a Major Political Issue." As the titles suggest, there is very little connection between the papers in terms of topic, substance, or approach. I will therefore discuss them separately in this commentary.


The Scammon, Mayer, and Bamossy paper is a well-written report of a survey of applicants for public funds to be used to participate and testify in FTC proceedings. Through this program the FTC was attempting to foster the presentation of viewpoints from various sectors in its "Trade Regulation Rule" (TRR) proceedings, which dominated much of the agency's activities in the late 1970s. Although I am fairly familiar with the rule-making processes generally, it happens that I have very little familiarity with the participation funding program. I therefore found much of the material in this paper to be new and interesting. I suspect that most members of the ACR audience would share both background and reactions with me in this case.

Given that the Scammon, Mayer, and Bamossy paper is essentially a descriptive piece on a quite circumscribed sample and set of issues, there is very little in the paper that I would be critical about, or with which I would disagree. At the same time, however, I should report that the brevity of discussion in the paper whetted my appetite for more information in several areas. In the remainder of my brief discussion, therefore, I should like to simply note some of the areas in which I was left with open questions:

(1) Purposes for the Paper: I usually find myself -- particularly if reviewing or commenting on a paper -- searching out the formal statement of a paper's purpose in order to appreciate the nature of knowledge or information to be transmitted therein. My only criticism of the present paper lies in this area -- the stated purpose appears too limited to adequately portray the rationale for the study or the report. Perhaps if I were better informed about the issues involved in the participation funding program, the basic purposes would have been apparent, but as it stands, they are not. I am left with the feeling that there is a larger purpose than that stated here ("... to present the results of a survey of applicants for intervenor funding and examine the program through their eyes."). At various other points in the paper a larger and more ambitious purpose appeared to be implicit, involving an effort to conduct a partial evaluation of the effectiveness of past funding efforts carried out within this program. Although, as the authors point out, program evaluation is extremely difficult in this setting, this would seem to be a worthwhile objective.

(2) Questions of Time and Timing: Everyone with an interest in public policy is well aware that there have been dramatic swings in regulatory philosophies, politics, and programs between the time that this funding program was authorized (1975) and the present. While this subject may seem beyond the scope of the present report, it is hard for me to imagine that it is not in fact a dominant consideration in any current description or analysis of the public funding program. I was disappointed, therefore, that the paper only barely gave notice to this larger context. In particular, I would have been interested in reading more about two further issues: how the program got started, and why the program seems to have ended.

(2-A) Background on the Early Program: The paper did a rather good job of touching on several key bases of the funding program, including its formal purposes and its distribution of funds across the rule proceedings. Unfortunately -- probably due to space constraints -- we did not learn about some other salient aspects of the program and its participants (the respondents to this survey). For example, how was the program operated within the FTC? Why did such a high percentage of the applicants receive funding? Was this a controversial program at its inception, or were the critics essentially responding to the administration of the program rather than to the basic concept? With respect to the substance of the program, obviously the inputs were specific to the rules at issue, but I am still somewhat confused as to exactly who was being funded under the program, and what sorts of inputs they were offering. Perhaps an example of one particular rule's experience could have been provided to help with this sort of question.

While this point may again seem to be beyond the scope of the report, I find that I had difficulty appreciating how significant the perceptions of program participants really are, when I have little accurate impression of who these persons or organizations are, what they stand for, or what they contributed to the rule processes. If, for example (and I should stress that I am only posing this as a hypothetical), the funding program had been used as a disguised subsidy for consumer groups (or business groups, for that matter), would we not expect very positive reactions in a survey such as this?

(2-B) Current Status of the Program: The paper contains a footnote reporting, without elaboration, that the FTC has not requested funds for this program in the last three years. It would appear, to the casual observer at least, that the funding program has ended. Given my exposure to the FTC, I can understand that this may not be the case, but I would have been very interested to read of the authors' views on this matter. In particular, has the participant funding program been judged either internally or externally as a failure? If so, for what particular reasons? If not, is the program now inactive because of its ties to rule-making processes, which themselves appear to be largely inactive? In either event, moreover, what is the likelihood that the funding program will ever be renewed, and under what sorts of conditions?


The Omura and Talarzyk paper deals with a very interesting topic: the possible impeachment of President Nixon. The paper contains some interesting ideas and findings, is well-written, and communicates its points clearly. While its direct relationship to consumer behavior might be questioned, its explanatory variables (i.e., opinion leadership, exposure to mass media, perceived accuracy of media information, and socioeconomic characteristics) are quite commonly used in consumer behavior studies.

I find that I am in general agreement with most of the premises and conclusions of the paper, so I would mention only a few research points which might temper some of the findings without necessarily overthrowing them. These concern two specific issues: the use of a panel survey for the purposes here, and the interpretation of certain results, especially those in the paper's first conclusion:

(1) Using a Consumer Panel Study: The paper stresses, at several points, the study of interpersonal influence and the "dynamics" of public opinion development and change. These are entirely appropriate concerns for a study such as this, but my feeling is that the authors do not quite go far enough in laying out the difficulties faced when using a one-shot consumer survey for these purposes. They do note that causal inferences are not straightforward. I feel that they should also have pointed out that opinion changes are also difficult to represent, and that representations about trends and futures are also of course quite tenuous.

The method itself would not seem inherently weak in representing interpersonal influences, but this paper does not report any detailed measures of such processes, and the sample would not be likely to contain much interaction among its membership. It could well be, however, that these sorts of measures were included elsewhere in the larger study of which this report seems to be a part.

(2) Reaching Conclusions about Support for Impeachment: The paper's first conclusion is "Had President Nixon not resigned from office, ... opinion in favor of impeachment could have increased ... based on the result that respondents in favor of impeachment rated themselves higher on opinion leadership and were involved in talking about the topic and trying to convince others of their point of view..."

It is quite plausible to me that the conclusion itself is true, but I am not convinced that it follows directly from the study's results. There are two basic reasons for my concern: the scale used to assign "opinion leaders" here, and the fact that the absolute sizes of the pro- and anti-impeachment groups were not explicitly included in the analysis.

With respect to the first point, the authors followed traditional measurement methods in assessing opinion leadership, and I certainly do not fault them for this (as noted in the paper, the scale follows closely that employed by King and Summers in their 1970 article on fashion opinion leadership). When I began to go through the items, however, I noticed that the nature of issues such as impeachment might deflect some of the measurement intent of the original scale. The first item, for example, asks, "In general, do you like to talk about the presidential impeachment issue with your friends?" Now, I am not sure that "liking" means exactly the same thing for fashion as it does for impeachment. More to the point, it seems reasonable that a person in favor of impeachment might well want to discuss it, and in some senses might enjoy discussing it. A person in favor of the status quo, however, might well feel on the defensive in such discussions, or might well otherwise view them in some negative fashion. It is not surprising, therefore, that these persons would be less likely to report "liking" to discuss the topic. If this is the case, moreover, the second item would also be suspect in the same vein, as supporters of the President report giving less information on impeachment than do opponents.

I have no way of knowing how sensitive the final assignments of overall opinion leadership are to the answers on these two items, but feel that the point might be interesting to pursue.

My second point involves the fact that absolute group sizes were not explicitly included in the analysis. The paper's conclusion of growing support for impeachment was therefore based upon the proclivities of persons assigned as opinion leaders, rather than their sheer numbers. To provide some perspective on the issue here, the proportion of "high opinion leaders" is heavily weighted toward those who favor impeachment (42% are high opinion leaders) as opposed to those against impeachment (17% are high opinion leaders). However, 62% of the sample was against impeachment, with only 38% in favor. This means that there were 91 opinion leaders favoring impeachment, with 60 opinion leaders opposing impeachment. This does not constitute an overwhelming trend, though it is in the direction of the conclusion. The extent to which members of the two groups are likely to interact, moreover, is not available, but would be interesting.
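As a rough arithmetic check on the figures above, the reported counts of 91 and 60 opinion leaders are consistent with a total sample of roughly 570 respondents. That sample size is not stated in this passage; it is back-calculated here from the reported percentages, as the following sketch shows:

```python
# Back-of-the-envelope check of the opinion-leader counts discussed above.
# N (total sample size) is an inferred figure, chosen so that the reported
# percentages reproduce the counts of 91 and 60 cited in the text.
N = 570

pro_share, anti_share = 0.38, 0.62              # for / against impeachment
pro_leader_rate, anti_leader_rate = 0.42, 0.17  # "high opinion leader" rates

pro_leaders = round(N * pro_share * pro_leader_rate)
anti_leaders = round(N * anti_share * anti_leader_rate)

print(pro_leaders, anti_leaders)  # 91 60
```

Under these assumptions the pro-impeachment opinion leaders outnumber the anti-impeachment ones by only about three to two, which is why the trend, while real, is not overwhelming.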

As a final exercise on this point, I calculated the numbers of persons who responded in an "opinion leader" manner on each separate item of the scale. The results are somewhat surprising, to me at least. In brief, they show more anti-impeachment persons giving opinion leader responses (as I coded them) on three of the seven items, including number five ("... Would you mainly listen to your friends' ideas or would you try to convince them of your ideas?"). The results here were 79 pro-impeachment persons reporting attempts to convince, versus 91 anti-impeachment persons giving this response.

In general, then, the conclusion concerning the future swings of public opinion regarding impeachment might be correct, but is not overwhelmingly endorsed by the data provided in the paper.

This is the extent of the criticisms I would offer on this paper. Beyond these points, there are a number of interesting findings reported, especially those regarding the weak relationships between descriptor variables and people's positions regarding impeachment. Each of my pet hypotheses here went unsupported, so I feel that there is considerable information contained in the tables of this paper. In addition, I would concur with the thrust of the authors' remaining conclusions concerning media roles, limitations of sole reliance on socioeconomic descriptors, and the value of multivariate analyses of issues involving either political opinions or consumer behaviors.