Interpreting the Polls

Seymour Martin Lipset, Stanford University
Citation: Seymour Martin Lipset (1976), "Interpreting the Polls," in Advances in Consumer Research Volume 3, ed. Beverlee B. Anderson, Cincinnati, OH: Association for Consumer Research, 17-23.



Public opinion research is in both good shape and bad shape. On the one hand, it is probably more widely accepted and used by various buyers than ever before. Business, government, politicians, and labor all now commission surveys before they start a "sales campaign," or to evaluate how they are doing with segments of the public. They want a breakdown of the public's values and interests with respect to what they are selling or doing. The survey has become a major tool of academic research. Government agencies now use samples in lieu of population censuses. Our breathlessly awaited labor force figures are the product of sample surveys.

Yet there are also problems. Just the other day the press carried a story about a situation increasingly familiar to us: the reluctance of the public to be interviewed. The refusal rate has increased, and it costs more to find respondents. The same pattern is evident for questionnaire surveys. In discussing the phenomenon, some researchers have asked whether this increased resistance reflects growing public hostility to polling, an increase in the sentiment that it is no one's business what I think, or perhaps a linkage of legitimate polling to its abuse by government security agencies or by sales firms that have had their agents pose as interviewers to gain access to homes or to secure confidential information. The increased concern about privacy has led some to ask whether a polling organization can in fact guarantee anonymity. Some time ago, I had an argument with Vern Countryman of the Harvard Law School, who had publicly urged students not to complete questionnaires sent out by the Carnegie Commission on Higher Education on the grounds that any information they supplied concerning their political views or affiliations, or their use of marijuana or other illicit drugs, might end up in the hands of police agencies. When I challenged Countryman's assumptions, saying this had never happened and that pollsters, like physicians, lawyers, or newspapermen, would refuse to divulge sources, he contended there was nothing in the law to protect their right to do so, and that he so advised everyone who was asked for an interview or to fill out a questionnaire.

Fortunately for the survey business, this legal view has not penetrated widely, but one still wonders whether the combination of increased suspicion and awareness of governmental and business invasions of personal privacy, together with the use of the interviewer role by salesmen as a means of gaining access, is responsible for the growing refusal rate in an increasingly suspicious world. (This reminds me of one definition of the difference between the paranoid and the normal person: the paranoid lacks the normal person's ability to repress awareness of the hostile behavior of those around him. Perhaps we are all becoming more aware.)

But if the public in the potential role of interviewee is less willing to cooperate, the public in the role of consumer of opinion research is eager to learn. As noted before, various elites very much want to know how the people out there are reacting. The growing numbers of college educated among the population are also desirous of knowing. And as elites have become more concerned with opinion findings, opinion researchers have increased their influence on outcomes. This is not the old issue of the bandwagon effect among the electorate, one which has never been resolved, a fact which suggests that the effect in any case is not very great. But where opinion results clearly do matter is in affecting the behavior of decision-makers who change their policies or their strategies because of survey findings.

We can see this today in the current presidential campaign. The published polls have almost made a front-runner out of the most eager non-candidate in history, Hubert Humphrey. Associates of the announced Democratic candidates complain bitterly over the fact that Gallup and Harris run presidential trial heats which include Kennedy and Humphrey on the list of possible candidates. They say that this is unfair, that the pollsters should take these men at their word, that they should limit the choices to those who are overtly in the race. In effect, they are arguing that Gallup and Harris have created a situation which bypasses the normal primary system, that the polls may determine who the Democratic nominee will be, that if published polls were not available the situation would be different.

The new conservative strategy advanced in recent books by William Rusher and Kevin Phillips, urging the creation of a Conservative Party to replace the Republicans, which underlies some of the enthusiasm behind the campaign for Ronald Reagan, is in large part a reaction to surveys which show that there are many more self-identified "conservatives" than "liberals," at the same time that self-proclaimed Democrats considerably outnumber Republicans.

Poll results in pre-primary situations are inherently much less reliable than in general elections, because the choices are less known and less structured and because only a small proportion of the electorate votes in primaries; yet they have helped to determine the results of primaries by undercutting the funds and enthusiasm for candidates who appear as also-rans in these surveys. Hubert Humphrey was greatly disadvantaged in the decisive California primary in 1972 by published surveys which showed him further behind McGovern than the actual results suggested he had been. (Parenthetically, I would note that there have been much greater discrepancies between primary election returns and poll estimates in a number of situations, a fact rarely if ever mentioned in newspaper columns presenting pre-primary public preferences. Discrepancies between the anticipations of pre-election surveys and the actual election returns are lowest in Presidential contests, where the pollsters report on the opinions of voters at the end of a highly publicized, well-structured partisan contest which has lasted for months and in which turnout is higher than in other situations. Efforts to "predict" the results of primary contests or of nonpartisan mayoralty races have produced gaps of from 10 to 20 percent. In Minneapolis, for example, the Metro Poll published by the Star reported mayoralty candidate Charles Stenvig running third in the elimination primary in 1969. Stenvig actually placed first by a wide margin. In 1973, the Star's poll considerably overestimated Stenvig's support, seeing him in the lead before the elimination by 10 percent when he lost the race. In Boston, this year, the Globe's mayoralty poll for the preliminary September primary, published two days before the election, reported incumbent Kevin White ahead by 2:1, with a lead of over 20 percent.

White's actual plurality was less than half of this. The Globe decided not to run a poll for the final election. In the November 1975 elimination primary for mayor of San Francisco, candidate Dianne Feinstein's strategy "to conduct the high tone campaign aimed at issues, not personalities" was, according to the San Francisco Chronicle, "based on polls which showed Mrs. Feinstein comfortably assured of a spot in the runoff." In the election she finished in third place, out of the running for the final contest.)

In a different context, it may be argued that the sense of malaise in the country, that we are experiencing a "failure of nerve," a crisis of legitimacy or authority, is heightened by the survey data, indicating decreasing confidence in various institutions, the Presidency, the Congress, business, religion, etc. These results encourage the left radical critics much as the preponderance of "conservatives" encourages the right-wingers.

Opinion polls have the power to affect what happens. This should make us especially concerned about issues of validity and reliability, and, at least as important, about the way results are presented and interpreted. There is, of course, no way of controlling how those who read the results of a survey interpret them, but opinion researchers do have an obligation to point out the ways in which different factors may affect the reliability of their findings: sampling issues, non-respondents, wording of questions, choices given to respondents, the placement of a question, the saliency of issues, the instability of various opinions, and so on.

These issues occasioned some public comment recently when the press noticed that Gallup and Harris produced somewhat different estimates of the public's evaluation of President Ford's performance. In a survey released for publication in early December 1974, Gallup reported that in answer to the question "Do you approve or disapprove of the way Ford is handling his job as President?" 42 percent approved, 41 percent disapproved, and 17 percent had no opinion. Harris' December results for the question "How do you rate the job President Ford is doing as President--excellent, pretty good, only fair, or poor?" found 46 percent giving "positive" replies, 52 percent negative, and only 2 percent "not sure." The differences between the two surveys upset some newspaper editors, but it should be noted that such differences had occurred on a number of previous occasions. Thus, Gallup's November 1974 figures for the same question were 48, 32, 20, while Harris' for his formulation were 48, 47, 5. For October, Gallup reported a large approval majority, 55, 28, 17, while Harris found more disapprovers, 45, 49, 6. This latter difference is quite startling: Gallup indicated the country approved of Ford by 55 percent to 28, while Harris said he was disapproved of by 49 to 45. The variations between the two surveys in the reported percentage with no opinion are consistently of considerable magnitude, Gallup's running from 15 to more than 20 percent, while Harris' generally come to well under 10 percent.

Variations such as these occurred with respect to estimates of the popularity of Congress as well. In March 1975, Gallup reported that 32 percent approved and 50 percent disapproved "of the way Congress is handling its job." Harris indicated that 26 percent had a positive view of "the job Congress has been doing so far this year," while 67 percent were negative. That is, negative views outnumbered positive ones in Gallup's data by 18 percent, while according to Harris' results the difference was 41 percent.
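The arithmetic behind these gaps is simple, and a reader checking the figures can reproduce it. A minimal sketch using the March 1975 Congress percentages quoted above (the function name is my own, for illustration only):

```python
def net_margin(positive: int, negative: int) -> int:
    """Return the signed gap between positive and negative responses, in points."""
    return positive - negative

# Gallup: 32% approved, 50% disapproved of Congress.
gallup = net_margin(32, 50)
# Harris: 26% positive, 67% negative on the same institution the same month.
harris = net_margin(26, 67)

print(gallup)  # -18: negative views lead by 18 points in Gallup's data
print(harris)  # -41: negative views lead by 41 points in Harris' data
```

The point is not the subtraction itself but that two reputable organizations, asking nominally equivalent questions at the same time, produce net margins more than 20 points apart.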

If we look at the public's view of the way the President has handled specific issues, comparable differences were reported by the two newspaper surveys. A Harris survey taken in December 1974 found 60 percent giving President Ford a negative rating for his economic program, as compared to 48 percent negative in a Gallup survey.

The issue of discrepancies between polls has again become a matter of considerable importance since the New York Times found, in a national survey which it conducted at the beginning of November 1975, that the public favored federal funds to help New York City in its financial crisis by 55 to 33 percent. Three weeks earlier, before President Ford's speech opposing such a policy, Gallup had reported that the opponents dominated by 49 to 42 percent. This variation over a three-week period, between a seven percent plurality disapproving and a 22 percent plurality approving, occurred in response to the identically worded question: "Do you think the federal Government should or should not provide funds to help New York City get out of its financial difficulties?" The two surveys, however, employed different interviewing techniques: Gallup's results were obtained from face-to-face interviews, while the Times queried its respondents by telephone.

Such differences with respect to specific issue questions have shown up repeatedly, particularly when the question is worded somewhat differently. Thus, in January 1975, Gallup asked respondents to choose between "two plans to reduce consumption of gasoline." Plan A involved rationing: "Each driver would be able to buy up to 10 gallons per week with the price remaining at the amount he or she presently pays." Plan B contained a price increase: "Each driver would be able to buy as much gas as desired, but the price would be increased by about 10 cents per gallon above what he or she presently pays." Harris, at about the same time, asked: "In order to conserve oil, if you had to choose would you have mandatory gasoline rationing, on an odd-even basis with no increase in the price of gasoline, or no rationing, but an 11-cent-a-gallon rise in the price of gasoline and fuel oil as a result of the tariff on imported oil from abroad?" Both reported their results under the headings of rationing versus a price increase or oil imports tax. Gallup found 37 percent for rationing and 48 percent for a 10-cent-per-gallon price increase, while Harris reported 60 percent for rationing and only 25 percent for an 11-cent-per-gallon increase in the tax. Much of the difference, of course, probably resulted from the variation in question wording: one defined rationing as limiting purchases to 10 gallons a week, while the other did not mention any limit. But the reports of each, as published in the press, presented the results in terms of support for rationing versus a price increase, and in these terms they reported sharply contradictory results.

Again, in December 1974, Gallup asked whether people expected 1975 to be a year of "rising prices or... of falling prices." Harris inquired whether "a year from now" you expect prices to be rising more rapidly, rising less rapidly, staying the same, or going down. Gallup found 19 percent expected falling prices; Harris' figure was 4 percent.

The ideological self-identification question which has so impressed William Rusher and Kevin Phillips has also drawn different responses in surveys conducted at about the same time. Between November 1974 and March 1975, four different polling agencies asked national samples to locate themselves as conservatives or liberals, using differently worded questions. Each reported a different distribution.

Harris and Gallup found many more self-identified conservatives than liberals, while the two university polling centers, the Survey Research Center (S.R.C.) of the University of Michigan and the National Opinion Research Center (N.O.R.C.) of the University of Chicago, reported little or no difference in the proportions of the population identifying with each term. In November, Gallup asked: "If an arrangement of this kind, that is two new political parties, were carried out, and you had to make a choice, which party would you personally prefer--the conservative party or the liberal party?" The results were 40 percent conservative, 30 percent liberal, and 30 percent undecided. In December, Harris inquired: "How would you describe your own political philosophy--as conservative, middle-of-the-road, liberal or radical?" His findings were 30 percent conservative, 15 percent liberal, 3 percent radical, 43 percent middle-of-the-road, and 9 percent not sure. Also in November, S.R.C. interviewers told respondents: "We hear a lot of talk these days about liberals and conservatives. I'm going to show you a seven point scale on which the political views that people might hold are arranged from extremely liberal to extremely conservative. Where would you place yourself on this scale, or haven't you thought much about it?" Conservatives led liberals slightly, 25 to 21 percent, with 26 percent choosing the mid-point position, 21 percent saying they hadn't thought much about it, and 7 percent giving don't know as their response. The question asked in March 1975 by N.O.R.C. was almost identical to S.R.C.'s, except that it did not include the option "or haven't you thought much about it." This format produced a tie between the liberal and conservative alternatives: both received 28 percent, with 37 percent placing themselves in the middle and 6 percent responding don't know.

Lest it appear that I am picking on the commercial polls, I should note that one can find additional variations in reports on the same issues in the surveys of the two main academic organizations. In their 1972 omnibus survey, the N.O.R.C. found 20 percent favored "busing of Negro and white children from one district to another," while 77 percent were opposed. S.R.C.'s 1972 survey indicated that only 9 percent supported "busing for integration" as compared to 86 percent for keeping children in neighborhood schools.

One of the organizations, Michigan's Survey Research Center, furnished an interesting set of sharply divergent results in two of its own studies, taken, it should be noted, two months apart, with the 1968 Presidential election intervening. In the pre-election survey, the Michigan interviewers asked a national sample: "Would you say that people like you have quite a lot of say about what the government does or that you don't have much say at all?" Three-quarters chose the option "don't have much say at all." Two months later, the same issue was presented as an agree-disagree item in the following terms: "People like me don't have any say about what the government does." This time only 41 percent agreed, 34 percent fewer than had seemingly voiced a comparable opinion earlier. A difference of the same magnitude in replies to highly similar questions in the two S.R.C. polls was reported in answer to the questions (a) pre-election, "Would you say that politics and government are so complicated that people like you can't really understand what's going on or that you can understand what's going on pretty well?" and (b) post-election, "Sometimes politics and government seem so complicated that a person like me can't really understand what's going on." Less than half the respondents, 44.5 percent, chose the alternative "can't really understand" in the first survey, while 71 percent agreed with this view two months later.

Some might suggest that the reason for the varying responses is that many less informed or uncommitted persons are inclined to agree with a statement, and that in both cases the change in views resulted from changing the question from an either/or form to agree-disagree. This may be so, but the same surveys also included two other closely related questions which shifted format but not response distribution. Thus, the S.R.C. pre-election study asked: "Would you say that most public officials care quite a lot about what people like you think, or that they don't care much at all?" Following the election, S.R.C. included the item: "I don't think public officials care much what people like me think." The two formulations produced highly comparable answer patterns: forty percent chose the "don't care" option in the first study, while only four percent more agreed that "public officials don't care" in the second. And to reiterate the finding, and to demonstrate that the sharp variations in the first two sets of questions did not result from disgruntled supporters of the defeated candidates, Hubert Humphrey and George Wallace, changing their minds, it may be noted that questions about voting in the two studies produced almost identical response patterns. Thus, 58 percent chose the option "voting is the only way that people like you can have a say about the way the government runs things" on an either/or item, while 57 percent agreed that "voting is the only way" when presented with this view in the agree/disagree format.

The differences in response to these four sets of parallel questions asked two months apart are puzzling to say the least. I have no plausible explanation.

If we turn to the crucial area of foreign policy, the same pattern of response variations recurs. In April 1973, Harris found 49 percent opposed to the "bombing by U.S. planes in Cambodia." During the same month, Gallup's results indicated 57 percent disapproved of "bombing Communist positions in Cambodia and Laos." The two most widely published surveys also produced varying results that same year with respect to opinions about the way the United States should treat North Vietnam after the war. In January, Gallup reported, in reply to the question "If a peace agreement is reached should we help rebuild North Vietnamese cities?", that 42 percent were in favor of doing so. In February, Harris inquired whether respondents "favor aid to North Vietnam to rebuild war-time damage" and found only 21 percent support for such a policy. There was a similar lack of consensus in estimating the public's attitude toward reestablishing diplomatic relations with Cuba at the end of 1974: Gallup reported 63 percent in favor, while Harris indicated only 50 percent had that view.

One of the sharpest variations in sentiments on an important policy issue was reported over a five month period by the Harris organization. In December 1974, Harris interviewers asked: "There has been a lot of discussion about what circumstances might justify U.S. military involvement, including the use of U.S. troops. Do you feel if (12 different circumstances for various countries) you would favor or oppose U.S. military involvement?'' Only 14 percent favored such involvement "if North Korea attacked South Korea," while 65 percent were opposed. In May 1975, Harris asked a specific question about Korea in the following terms: "The U.S. has 36,000 troops and airmen in South Korea. If North Korea invaded South Korea, we have a firm commitment to defend South Korea with our own military forces. If South Korea were invaded by North Korea, would you favor or oppose the U.S. using troops, air power and naval power to defend South Korea?" Not surprisingly, this wording elicited a much higher positive response for American military participation than the earlier question, but still the variation was staggering, 43-37 percent were for the use of U.S. troops in the second study, as compared to the earlier unfavorable majority of 65-14 percent.

Reports on public opinion toward U.S. involvement in Korea were equally disparate twenty-five years ago, during the Korean war. Thus, in December 1950, the Gallup Poll found, in answer to the question "Do you think the United States made a mistake in going into the war in Korea or not?", that only 39 percent approved of the war. An N.O.R.C. survey taken in the same month reported 55 percent in favor in response to the query: "Do you think the U.S. was right or wrong in sending American troops to stop the Communist invasion of South Korea?" In April 1951, a repeat by Gallup of his earlier question produced a 43 percent vote in favor, while N.O.R.C. found 63 percent for our intervention in reply to their more emotionally worded question. Two surveys conducted by N.O.R.C. one month apart in 1953 produced a 37 percent variation in the proportion supporting the war, seemingly as a result of varying the question. In August, they asked: "As things stand now, do you feel the war in Korea has been worth fighting, or not?" Only 27 percent responded positively. But one month later, when N.O.R.C. used the earlier formulation as to whether "the United States was right or wrong" to have sent troops, 64 percent said the policy was right. Can one speak of the true or real feelings of the American public with respect to Korean policy, in the fifties or now, when the same polling organizations, presumably using the same sampling frames, interviewers, etc., can elicit such disparate results by varying the wording of the question?

The ability of pollsters to change response patterns and to find sharply different sentiments on what appears to be the same issue can also be illustrated with respect to attitudes toward American policies in the Middle East conflict. In presenting the results of a survey taken in December 1974, Harris wrote in a New York Times Magazine article: "Another lopsided 66 to 24 majority favors sending Israel what it needs in the way of military hardware." One month earlier, a Yankelovich poll had found only 31 percent in favor of the U.S. sending arms to Israel, while 57 percent were against. In January 1975, Yankelovich found 45 percent in favor of military aid to Israel in response to one question, a figure which declined to 28 percent when the question was formulated differently in the same survey. But it must be reported that a Gallup survey, also taken in January, reported only 16 percent supporting military aid of various types for the Jewish state, with another 7 percent urging general support. Over half the respondents, 55 percent, gave Gallup interviewers responses which were coded under the heading "stay out of the conflict." A couple of months later, however, Gallup reported that 54 percent favored either sending military supplies (42 percent) or American troops (12 percent), while only 37 percent opposed American aid to Israel in a renewed Middle East conflagration.

These drastic variations seemingly resulted from the very different ways the questions were formulated in the five studies. Harris' December interviewers elicited a 66 percent positive response for military aid to Israel when they asked: "As you know, the U.S. has sent planes, tanks, artillery, and other weapons to arm Israel. The Russians have sent similar military supplies for Egypt and Syria. In general, with the Russians arming Egypt and Syria, do you think the U.S. is right or wrong to send Israel the military supplies it needs?" Yankelovich found his 31 percent figure in November in reply to a question about military aid to Israel in the context of queries about a number of countries: "The U.S. sends arms and military equipment to a number of foreign countries. Do you personally feel that the U.S. should or should not send arms to (country A, B, C, Israel)?" His 45 percent favorable to military aid in January was in reply to: "In view of the situation in the Middle East, do you feel that the U.S. should increase its present aid to Israel, continue it at the same level as now, or cut it back?" Thirty-six percent said "continue," while the remainder of the favorable group wanted an increase. The much lower 28 percent figure in the same survey was in response to: "Do you favor selling arms and military equipment to both Israel and the Arabs, just Israel, just Arabs, or neither?" Fourteen percent said "both," another 44 percent "just Israel," and almost two-thirds, 63 percent, were opposed to selling arms to either.

Gallup's low report of 16 percent was obtained in January in reply to an open-ended question: "What should the U.S. do if a full-scale war breaks out in the Middle East?" His high estimate of 54 percent occurred in April in answer to the query: "In the event a nation is attacked by Communist-backed forces, there are several things the U.S. can do about it. What action would you want us to take if Israel is attacked--send American troops, or send military supplies but not send American troops, or refuse to get involved?" These six questions produced percentages in favor of sending or selling arms and/or troops to aid Israel of 66, 45, 31, 28, 16, and 54. And as a final note on this issue, it must be reported that a February 1975 Harris survey found the public opposed to "selling military equipment to (all) other nations" by 53-35 percent.

The problem basically is that public attitudes toward a given issue are usually too complex to be summed up by the responses to one or two questions. People can and do hold what appear to be contradictory opinions on the same subject. This point may be illustrated by reference to the responses to a number of questions on the Middle East situation given by a national sample of American professors in a survey conducted in the Spring of 1975 by Everett Ladd and myself. Almost three-quarters, 74 percent, agreed with the statement, "The U.S. should pursue a more neutral and even-handed policy in the Middle East," of whom 31 percent strongly agreed. Slightly more than half agreed that "The U.S. should apply pressure on Israel to give in more to Arab demands." Two-thirds disagreed with the proposal that "If the United Nations were to vote to expel Israel, the U.S. should withdraw from the U.N. in protest."

Yet over two-thirds of the same national sample of college faculty approved the statement: "The U.S. should continue to supply Israel with weapons and military equipment." When asked to choose among four alternative courses of United States action, "If Israel were attacked by Arab countries and threatened with defeat," less than a quarter, 24 percent, recommended that it "take no military action." A larger group, 30 percent, endorsed either sending "air support" (13) or "U.S. troops if necessary" (17). The remaining 45 percent favored sending "military aid but not U.S. personnel." Over three quarters of the faculty respondents agreed that "Israel has a right to keep the city of Jerusalem as its capital, so long as the Israelis respect the religious rights of Christians and Moslems."

Close to two-thirds, 64 percent, however, thought that "The Arabs should be allowed to set up a separate nation of Palestine on the West Bank of the Jordan." But only 13 percent agreed that "Guerilla activities on the part of the Palestinian Arabs are justified because there is no other way for them to bring their grievances to the attention of the world." This reply, however, did not reflect a general disapproval of violent means, for almost two-thirds, 65 percent, disagreed with the statement: "It is wrong for Israel to retaliate against the Arabs whenever Arab guerillas commit an act of terrorism."

Looking at this diverse set of responses, it is apparent that it would have been impossible to sum up the views of this group with a few simple questions designed to locate their general sympathies in the Middle East conflict. Like most Americans, faculty sympathies lie more with Israel (57 percent) than with the Arabs (8 percent), and such sympathies are reflected in support for armed aid for Israel, for its claims on Jerusalem, and for its right to retaliate. But the faculty, also by a large majority, do not want to see the U.S. involved in a Middle East war, and would like to see the tensions reduced. These sentiments give rise to approval of proposals that the U.S. pursue a more "even-handed" policy, that it press Israel to "give in more to Arab demands," and that the Arabs be allowed a Palestinian state.

Presenting this example, drawn from my own work, is not designed to suggest that commercial pollsters are naive about the complexities involved in issues such as these, or that they have not done comparable research designed to explore issue opinion in depth. The December 1974 Middle East study of the Harris organization, as well as surveys on the same problem by Yankelovich and various Gallup studies of foreign policy, have included a variety of different questions designed to find the parameters of opinion. The problem is not that the research design is unsophisticated but that the published reports, or even private ones for clients, often simplify the issues, on the assumption that the reader will not understand, or care to know about, complexities, that he wants relatively straightforward and simple answers with respect to attitudes toward federal help for New York, aid to Israel, the proportion interested in buying a new car next year, etc.

There are, of course, many other issues involved in interpreting opinion from surveys than those I have discussed. The picture can appear very different depending on how the answers to the same questions are presented. Thus, a number of polling organizations have inquired into the confidence the public has in the people in charge of various institutions--medicine, Congress, major companies, the Supreme Court, etc. All the surveys agree that confidence as expressed in responses to these questions has been eroding since the mid-sixties, although again it must be mentioned that the percentages reported giving the same response to the same question for the same institution at about the same time have varied considerably.

One polling organization, the market research division of the Procter & Gamble Company, noting the variation in responses to such questions, recently undertook an experiment to see how much the expression of confidence in different institutions may be varied by using different terms to describe them. They divided their sample into three groups and asked each whether they "have a great deal of confidence, a moderate amount of confidence, or no confidence in it" for a number of institutions, giving each third a different term for the same institution. Some of the results are presented below.

TABLE 1

LEVEL OF CONFIDENCE IN DIFFERENT AMERICAN INSTITUTIONS USING DIFFERENT TERMS

It is clear from looking at these results that a sharply different picture of the level of confidence in different institutions emerges depending on the words used to depict them. Fifty percent have a great deal of confidence in established religion, but only 35 percent feel the same way about organized religion. Almost two-thirds, 63 percent, are very positive about the "Army, Navy and Air Force," but the high level of confidence declines to 48 percent for the "Military," and drops way down to 21 percent for "Military Leaders." More than a third, 35 percent, express no confidence in "election polls," but only 18 percent have the same negative view of "Public Opinion Pollsters." Twenty-one percent have a great deal of confidence in "Organized Labor," but only 7 percent have the same view of "Big Labor." The responses reported in this table tell us a great deal about the public's sentiments, but equally important is the fact that they illustrate in detail the instability of such replies, the extent to which it is possible to vary the public's view by changing the way in which institutions are depicted.

It is also important to recognize that one gets a very different image of the confidence level of the country depending on whether a survey organization reports and discusses only the proportion of respondents voicing "great confidence," lumping together those indicating "some confidence" with respondents who selected "hardly any confidence at all." The most widely circulated poll reports published in the press, those conducted by the Harris organization, list only the "great deal of confidence" figure, which in recent years has been under 50 percent, often well under that figure, for the leaders of most institutions. And this fact is generally interpreted in the accompanying commentary to mean that Americans lack confidence in the leadership of almost all their key institutions. But if we look at the percent of those who say they have "hardly any confidence," a quite different picture emerges. For it turns out, according to a 1975 N.O.R.C. survey, that for all except the leaders in the political realm (the executive) and organized labor, the proportion indicating a lack of confidence runs from a tenth to a fifth, while even for politics and labor, the proportions are all under 30 percent. Or to put the matter another way, from 71 to 92 percent of respondents indicate "some" or "a great deal" of confidence in various key institutions. Is the confidence glass more full or more empty? The implications of the results are debatable, but Harris, in reporting the low proportions voicing "a great deal of confidence," concluded: "In short, there is a leadership vacuum in this country across the board."

A similar point may be made with respect to Gallup's interpretation of the differences in the public's response between 1971 and 1975 to a question cited earlier with respect to Israel. The same question, "In the event a nation is attacked by Communist-backed forces, there are several things the U.S. can do about it. What action would you want to see us take if (a different specific country) is attacked--send American troops, or send military supplies but not send American troops, or refuse to get involved?" was asked about a number of countries. The results were published in The New York Times of May 11, 1975, with the descriptive interpretation that they offered "little evidence" of the much heralded trend toward isolationism among the public. This conclusion rests on the fact that there was very little difference in the percentages favoring sending American troops between 1971 and 1975; the average figure dropped only one percent for the seven nations. The picture, however, is quite different with respect to the proportions who chose the option that the U.S. should "refuse to get involved." On average, the non-involvement group increased by 7 percent, with much more substantial changes for some countries: for West Germany, non-involvement rose from 22 to 33 percent; for Taiwan, from 45 to 54; for Turkey, from 37 to 49. Clearly, the shifts over four years in the proportion saying we should "refuse to get involved" present strong evidence for the conclusion that Americans have become more isolationist, but Gallup apparently contended that his data refute such a view because the generally small proportions favoring the sending of troops had not increased.

Given the increased influence survey analysts have in affecting the policies of businesses, other institutions, journalists, and politicians, and the mood of the general public, it is important that the limitations of the instrument be recognized more widely than they are. Much of this is obvious to those who work professionally with survey data; they know their own weaknesses. But like most businessmen, they do not stress the deficiencies of their product to clients. They do not emphasize the complexities involved in analyzing data, or the frequent need for more expensive research or more detailed, complicated write-ups if the client is to understand the state of opinion.

Thus it is obvious, and George Gallup made the point over three decades ago, that it is easy to change responses by presenting a given view in association with positively or negatively valued symbols, goals, or persons. By associating a given choice of action with resistance to Communist attack, or with the possibility of sending U.S. troops into action, the percent favorable may be varied by as much as 20 percent. Response distributions also change greatly depending on where a question is located in a schedule. Three academic authorities on survey research, Milton Rosenberg, Sidney Verba and Philip Converse, who examined many of the surveys dealing with Vietnam during the war in their book, Vietnam and the Silent Majority, after reporting large-scale variation among different polls seeking to measure support for or opposition to the war or to specific Vietnam policies, concluded (1970: 23-24):

One of the reasons why subtle changes in the wording of a question produce different responses is that many of the people whom the pollster questions do not have very well-formed and deeply held opinions on the matters about which a pollster is asking. They are likely to be responding to a question to which they have not given much or any previous thought. What this means is that the wording of the question makes a big difference in how they reply. It also means that the answers that any individual gives can possibly change from day to day. If an individual has not given serious thought to a question, his answer is likely to be offhand. If the pollster were to come back the next day, a somewhat different answer would be obtained... If a question is asked in which negative symbols are associated with withdrawal from the war, people sound quite "hawkish" in their responses. Thus, people reject "defeat," "Communist takeovers," and "the loss of American credibility." On the other hand, if negative symbols are associated with a pro-war position, the American public will sound "dovish." They reject "killings," "continuing the war," and "domestic costs." Turning the matter upside down, we see the same thing. If positive symbols are associated with the war, the American public sounds "hawkish." They support "American prestige," "defense of democracy," and "support for our soldiers in Vietnam." On the other hand, if positive symbols are associated with "dovish" positions, the people sound "dovish." They come out in support of "peace," "worrying about our own problems before we worry about the problems of other people," and "saving American lives."

Thus it is possible, even in the same poll, to have the American public sounding like hawks and doves at the same time. At times, these seeming inconsistencies are due to the fact that the positions are not that inconsistent, but at times they are due simply to the alternative question wordings. Lastly, we should point out that many Americans do, in fact, hold inconsistent views on the war. They favor sets of policies that are not compatible one with another... (Rosenberg, Verba, Converse, 1970: 25).

In seeking to understand the considerable variations between the New York Times and Gallup results, cited earlier, in response to questions about federal aid to New York City, Robert Reinhold, a New York Times reporter, noted several factors: Gallup and the Times used different interviewing methods, face-to-face and telephone; "the questions were asked in different context"; "'measurement error'--imperfections in the questionnaire or slight differences in the way questions are asked and perceived"; refusals, since 30 percent of those called by the Times "declined to be interviewed"; and "wide public confusion over the complex issue." Such methodological problems, however, are rarely presented in articles reporting on survey findings. Clearly, what occasioned them here was the enormous difference between Gallup's results, which had been reported on the front page of the Times on November 2 while the Times' own study was still in the field, and the Times' data published on November 5. Since the Times could hardly have presented such discrepant findings without commenting on them, it followed the unusual procedure of devoting considerable space to explaining the possible sources "for an unusually high margin of error" in all opinion surveys.

As a final point, it is also important to note that the opinions of the public, even those expressed a few days before an event takes place, may be sharply different from their reaction to the accomplished fact, particularly if it is one initiated by an important leadership figure such as the President. George Gallup pointed this out at the end of the 1930's with respect to reactions to foreign and military policies, conscription, etc., before and after President Roosevelt gave his views on the subject. A more recent similar development has been discussed by Rosenberg and his colleagues:

The role of Presidential prestige and the willingness of the American public to go along with Presidential activities once he has acted can be seen rather clearly in the reaction to President Nixon's decision to send troops into Cambodia at the end of April, 1970. As Presidential actions go, this was perhaps one of the least popular actions of the Indo-China war. Yet the data are most striking.

On the eve of the Cambodian invasion, the Harris Poll asked a sample of the American population how they would feel about the commitment of American troops to Cambodia. Only 7 percent favored sending troops while 59 percent rejected such a commitment. (Twenty-three percent approved the sending of advisers and the rest were undecided.)

What happened a few days later when the President did commit troops?... Despite the very small number who favored sending American troops to Cambodia before the President did just that, and despite the skepticism and apprehension of a very large majority after they were sent, when the Harris Poll asked whether Nixon was right in sending troops, more said "yes" than "no." Fifty percent agreed with Nixon's decision while 43 percent had doubts.

These data vividly illustrate the prestige of the President when he acts and the malleability of American opinion. The wide gap between the 7 percent who favored sending troops before they were sent and the 50 percent who approved the President's decision after he had decided to send the troops is a measure of the support he can arouse... (1970: 26-28).

This discussion is not intended to undermine opinion research, but rather to emphasize the need for greater care. Most issues are quite complicated and, as noted earlier, require a number of questions to explore the nature of the views, including contradictory opinions on the same issues, which respondents hold. Many matters asked of people are basically of little concern to them; they often know little and care less, yet they answer questions. A study of attitudes toward the John Birch Society in the 1960's found that when those who said they approved of the organization were also asked whether they thought it was a leftist or rightist group, one-third of them described the Birch Society as leftist. But most polls which inquired about attitudes toward the Birch Society did not try to discover what image, if any, respondents had of it. What is one to make of the fact, as found in a 1974 survey, that over a third of those who said they preferred George Wallace to Richard Nixon or George McGovern agreed with the item, "I would not vote for a right-winger"?

Public confidence in the polls largely rests on the fact that pre-election surveys have a very good track record in anticipating the reactions of the electorate in choosing between two men, occasionally three, in Presidential contests. Fortunately for the image of election polls, as noted earlier, no one has bothered to check up on their record in forecasting primaries and mayoralty or statewide contests, a record which is much less good, up to literally yesterday's 1975 mayoralty elections.

Yet all is not lost, for it seems clear that if our concern is to understand the factors associated with different views or behaviors, rather than their absolute magnitude, relationships are generally consistent and reliable. Scales of opinion preferences, buying behavior, and media habits can be related to demographic and attitudinal variables, so that we can know what kinds of people are social conservatives or own Volvos. It is possible to analyze what kinds of people will vote for a particular candidate, or what his image, or that of a product, is among different groups in the population, and pollsters can specify how hard or soft commitment to different views is. The same questions may be repeated over time in order to estimate the direction and approximate magnitude of changes in views or behavior.

But it must be reiterated: it is not really possible to know the opinion of the public on most issues, since there is no such opinion. There is at best usually a set of predispositions among many, but locating these does not enable us to predict behavior, or subsequent poll results, with any precision.

Given all this, my counsel to all involved in survey research is humility, caution, complexity. They may reject it as ruinous to business, but I do not think making the client aware of the limits of survey findings will lose business. For people, whether businessmen, politicians, editors, or the more politicized segment of the public, have an insatiable, uncontrollable need to know something, anything, about public reactions. They will pay for research, for reports, for articles, even if they are reminded of the weak reeds on which the conclusions rest. And by reemphasizing the instability of many attitudes and preferences of the public, we may help restore the role of judgment, of active leadership in policy and decision-making, rather than the pattern of follower-leadership which is currently so prevalent.

REFERENCE

Milton Rosenberg, Sidney Verba and Philip Converse, Vietnam and the Silent Majority (New York: Harper & Row, 1970).
