Tuesday, November 9, 1999 11:51 PM 
The Times Poll FAQ
Here are some questions which have been asked of us frequently over the
years (occasionally at high volume), and our answers.
Question
What is a "margin of error"? What is a "confidence interval"?
Answer
These two terms -- margin of error and confidence interval
-- are closely related, and are indicators of the "strength" or "truth"
of a statistical number obtained from a sample, such as a polling result.
This is why every reputable survey will include a statement of the margin
of error with the results of the survey.
Ideally, to answer a question like, "Are the voters going to elect
Mr. Alpha or Ms. Beta to be mayor of our city?" one would contact and
ask every voter in the city how he or she intended to vote. Even if all
voters had made up their minds already and would truthfully tell a pollster
their preferences, it is obviously just not possible to interview that
many people. Pollsters instead interview a smaller number of randomly-selected
city dwellers and use standard statistical methods to project their answers
to the rest of the population.
The margin of error and the confidence interval are the expression of
the confidence with which that projection may be made. Typically, a sample
is analyzed with a standard confidence level of 95%, meaning that
95% of the time the answer will lie within the margin of error. (This is
such a standard measure that we usually don't even mention it.) The margin
of error, then, is the range of numbers surrounding the projected figure,
such that we can be 95% confident that the actual number lies within that
range. If this is as clear as mud, read on... an example may help.
Example: Out of a sample of 500 city voters, 45% have said they
will vote for Mr. Alpha, 51% have said they will vote for Ms. Beta and
4% have opted to vote for someone else. The figures seem to tell us that
Ms. Beta is ahead by a fairly solid six points; but can we publish in the
newspaper the news that Ms. Beta is ahead in the election and be sure of
our prediction?
The answer lies with the margin of error. In a survey of 500
people, we can be 95% sure that the value lies within 4 percentage points
of our result. (Don't worry about where this number "4" comes from. It
is a standard calculation that can be found in any basic statistics book.)
We say that this survey has a
"margin of error of plus or minus 4 percentage
points."
What this means is: our small sample predicts that Ms. Beta will get between
47% and 55% of the popular vote (add and subtract 4 percentage points from
51%). This range of values is called the confidence interval. By the same reasoning, the survey
predicts that Mr. Alpha's share of the vote will be between 41% and 49%.
Since the top range of Mr. Alpha's vote (49%) and the bottom range of Ms.
Beta's vote (47%) overlap, this race would be too close to call with a
sample size of only 500.
(The Times Poll's sample sizes typically produce a margin of sampling
error of +/- 3 percentage points.)
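The "standard calculation" mentioned above is the normal-approximation formula for a 95% confidence interval on a proportion: 1.96 times the square root of p(1 - p)/n. A minimal sketch, assuming simple random sampling and the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (simple random sample)."""
    return z * math.sqrt(p * (1 - p) / n)

# A survey of 500 respondents, worst-case p = 0.5:
print(round(margin_of_error(500) * 100, 1))   # 4.4 (rounded to "4" above)

# A sample of roughly 1,100 respondents gives the typical +/- 3 points:
print(round(margin_of_error(1100) * 100, 1))  # 3.0
```

With n = 500 the formula gives about 4.4 points, which the example above rounds down to 4; quadrupling the sample size only halves the margin of error, which is why polls rarely go far beyond 1,500 respondents.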
Question
Why do different polls sometimes get such different results?
Answer
"Polls are a snapshot in time." This is a cliché, but true. Surveys
are done over a period of days and responses can be affected by such things
as television coverage of events, campaign advertisements and opinions
expressed by people in the news. No two survey organizations ask the
same questions in the same order of the same people over the same period
of time. Even if they did, their results could vary by several points (see
"margin of error" above) and still be considered statistically valid.
Sampling error. Polls measure responses to specific questions
and are subject to random and introduced error of many kinds. Survey results
are often discussed by the media as if they were exact numbers when they
are, in truth, measured approximations. Confidence intervals
help analysts account for random sampling error, but not for error introduced
by leading question wording, order bias or interviewers who
fuel a respondent's natural desire to please.
Differences in question wording and/or context. An example taken
from late Times Poll Director John Brennan's column which ran May 20, 1993,
illustrates the point: On May 6th, 1993, a "Nightline" broadcast noted
that a majority of Americans now supported military action in Bosnia-Herzegovina.
However, the next day's USA Today headline, "55% Oppose Air Strikes,"
sent a completely different message.
How is it that two news organizations had such different perspectives
on public opinion? Examine the different question wordings below. The Gallup
poll question (cited by USA Today) did not mention European allies,
making it sound like the U.S. would be acting alone in carrying out air
strikes. This question wording found only 35% in favor. But when ABC News
asked about support for air strikes
in conjunction with our allies,
65% were in favor. The differences illustrate the importance of question
wording in survey research.
Gallup/CNN/USA Today question:
"As you may know, the Bosnian Serbs rejected the United Nations Peace
plan and Serbian Forces are continuing to attack Muslim towns. Some people
are suggesting the United States conduct air strikes against Serbian military
forces, while others say we should not get militarily involved. Do you
favor or oppose U.S. air strikes?"

    Favor                   35%
    Oppose                  55
    No Opinion               6
    Depends (volunteered)    3

Poll of 603 adults nationwide, taken 5/6/93. Margin of error is +/- 5%.

ABC News question:
"Specifically, would you support or oppose the United States, along
with its allies in Europe, carrying out air strikes against Bosnian Serb
artillery positions and supply lines?"

    Favor                   65%
    Oppose                  32
    No Opinion               3

Poll of 516 adults nationwide, taken 5/6/93. Margin of error is +/- 5%.
Question
Why don't I see the opinions of Asians cited in Times Poll stories
and graphics as often as the opinions of other groups?
Answer
The Times Poll asks itself this question in a different way: "How
do we produce reliable samples of the different peoples in our multi-racial,
multi-lingual population while still maintaining our rigorous standards
and meeting our deadlines?"
When Asian-Americans or other minority groups' specific opinions are
not cited in a Times Poll story, graphic or stat sheet, it is simply because
the Poll never cites results for subgroups of less than 100 respondents.
Due to the multi-lingual nature of the Asian population and its high
level of geographic dispersal, it is very difficult to obtain good samples
of the Asian populations living in Southern California. Merely interviewing
more English-speaking Asians in order to have enough in our poll to cite
results would OVER-represent the responses of the smaller group who speak
English. This problem is what keeps us from regularly "oversampling" the
Asian population in order to include their answers in our paper.
We have accepted the challenge of surveying this important community
while maintaining good sampling techniques by undertaking a series of polls
conducted in the language of the respondent's choice, each one focused
on a particular Asian subpopulation. We have completed in-depth surveys*
of Korean (poll #267), Vietnamese (#331), Filipino (#370) and Chinese (#396)
groups as of summer 1997. Not only are our results published in The
Times, but the surveys have attracted national attention and we have
made the data available to academic and media analysts all over the country.
*The Stat Sheets for these surveys -- and for most of
the other polls we've done since 1992 -- can be found in the Poll's free
archives
on www.latimes.com. These are PDF files so you'll need the free Adobe Acrobat
Reader software to view/download/print them. It is available on Adobe's
Web site.
Question
...but aren't there as many Asians as blacks in California?
Answer
The 1990 Census and subsequent projections do show that the two populations
are of similar size in California. The main difference between
the two groups from a polling perspective is the proportion of English
speakers in each population.
According to the 1990 Census, 94% of black adults in California speak English
at home while only 19% of Asian adults speak English at home. Virtually
all (99%) of the black adult population speaks English at home, or speaks
English well or very well, while 78% of Asians fall into that category.
This means that the black population is fully represented in our sample,
while we are unable to speak with more than one out of every five Asian
adults that we reach.
Question
Why doesn't the Times Poll conduct online or call-in polls?
Answer
Online polls are surveys that Internet users or subscribers to
a particular service can participate in by logging their views over the
Internet using a computer. Call-in polls are surveys in which people are
invited to call a phone number (which may or may not be toll-free) to register
their views. In both cases, the results are usually tabulated and made
public in one way or another.
The Times Poll does not conduct call-in or online polls because
the results of these so-called surveys are unreliable. They have several
methodological problems.
Our polls (like all scientifically sound public opinion surveys)
are conducted by first selecting a random sample of people to interview
-- usually based on their telephone numbers -- and then calling each of
them and persuading them to talk to us about the subject that we
are interested in. We carefully monitor the results to be sure that our
sample is representative of the race, educational attainment, regional
distribution, etc., of the population we are sampling. The results of such
a survey can be relied upon to approximate the views of the entire population.
Online and call-in "polls," on the other hand, represent nothing
but the views of those who actually participate in them. Rather than being
a representative sample of the larger population, such surveys tend to
attract those who are especially motivated to respond to a particular issue.
This is known as a "self-selected" sample. Such a group would probably
be weighted toward the sort of people one hears on radio talk shows --
mainly those with strongly held opinions on a particular subject.
Any online survey is also self-selecting in that it is accessible
only to those who have a connection to the Internet or to the online service
provider, which at present excludes a very large portion of the population.
Unfortunately, the media sometimes have a hard time distinguishing
between this type of survey and reputable polls, and the results of each
are often given equal play. There are many good and bad surveys out there,
so it is important that you as a viewer or reader judge cautiously the
statistics being thrown at you.
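A toy simulation makes the self-selection problem concrete. In this hypothetical setup, the population is split exactly 50/50 on an issue, but one side is assumed to be three times as motivated to phone in; the call-in "poll" lands near 75% while a random sample stays near the true 50%.

```python
import random

random.seed(1)

# Hypothetical population: exactly 50% favor, 50% oppose a proposition.
population = ["favor"] * 50_000 + ["oppose"] * 50_000

# Random sample: every adult is equally likely to be interviewed.
random_sample = random.sample(population, 1000)

# Call-in "poll": supporters are assumed to be 3x as motivated to phone in.
weights = [3 if view == "favor" else 1 for view in population]
call_in = random.choices(population, weights=weights, k=1000)

def pct_favor(responses):
    return 100 * responses.count("favor") / len(responses)

print(f"random sample: {pct_favor(random_sample):.0f}% favor")  # near 50%
print(f"call-in poll:  {pct_favor(call_in):.0f}% favor")        # near 75%
```

The 3-to-1 motivation gap is invented for illustration, but the mechanism is the point: the call-in result measures who bothered to respond, not what the population thinks.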
Question
Why haven't you ever called me or anyone I know? I've lived here
for many years...
Answer
Here is a thought-experiment to illustrate the answer. First, note that
there are approximately three million people over 18 years old living in
the city of Los Angeles. The Times Poll speaks with approximately 1500
people per survey, and has completed over 400 polls as of the end of 1997.
So we have spoken to over 600,000 people in 20 years of polling. Even if
all
those polls had been conducted in the city of Los Angeles (they weren't,
of course -- they were conducted all over the world), we still would have
spoken only to about 17% of the adult population of Los Angeles. More than
four out of every five people in L.A. would never have been called by the
Times Poll over those 20 years.
So the odds are not very high that we will ever happen to call your
phone number or the phone number of any of your friends. On the other hand,
you have just as good a chance of being called by us as does anybody else
who has a telephone.
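The thought-experiment above can be checked with a back-of-the-envelope model. Assuming, for illustration, that each of 400 surveys independently draws 1,500 adults from the same pool of 3,000,000, the chance that any one adult is never called works out to roughly 82% over all 400 polls, consistent with the "more than four out of every five" figure above:

```python
adults = 3_000_000     # adult population of the city of Los Angeles
per_survey = 1_500     # respondents per Times Poll survey
surveys = 400          # polls completed as of the end of 1997

# Chance of being called in any single survey: 1 in 2,000.
p_picked = per_survey / adults

# Chance of never being called across all 400 independent draws.
p_never = (1 - p_picked) ** surveys

print(f"{p_never:.1%} of adults never called")  # 81.9% of adults never called
```

This simple model slightly overstates the "ever called" share relative to the 17% quoted above (not all 400 polls sampled the city of Los Angeles), but the order of magnitude is the same.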
Question
Everyone I know disagrees with your poll. Did you make up the results?
Answer
It is easy to believe that the people that you work with, live
near and associate with socially are representative of the country or city
in which you live, but actually most people are surrounded by others who
are more like them in their political beliefs, demographics and personal
opinions than not. That is part of what allows pollsters and the Census
Bureau to sample populations with as much accuracy as we do.
The Los Angeles Times Poll conforms to the standards set
by the American Association of Public Opinion Research and the National
Council on Public Polls. We carefully monitor interviews in progress to
be sure that the questions are asked in a uniform and neutral manner by
our interviewing staff, and we use random-digit dialing techniques to ensure
that everyone with a telephone has an equal chance of being included in
our surveys. (Coverage of the 5% of Americans with no phone in their homes
is a different subject.) In other words, we take great pains to see to
it that the people we interview are a truly representative sample of the
entire adult population. And among all the people we talk to, you can be
sure that a proportionate number of them will share your views on the issues
we ask them about.
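The random-digit dialing idea mentioned above can be sketched as follows: append random digits to working area-code/prefix combinations, so that unlisted numbers have the same chance of selection as listed ones. The prefix list here is purely hypothetical; a real sampling frame would be built from known working telephone exchanges.

```python
import random

random.seed(0)

# Hypothetical working area-code/prefix combinations for the frame.
PREFIXES = ["213-555", "310-555", "818-555"]

def random_digit_number():
    """Pick a prefix, then append four random digits, so listed and
    unlisted numbers in that exchange are equally likely to be drawn."""
    prefix = random.choice(PREFIXES)
    return f"{prefix}-{random.randint(0, 9999):04d}"

sample = [random_digit_number() for _ in range(5)]
print(sample)
```

Dialing generated numbers (rather than drawing from a phone book) is what gives "everyone with a telephone an equal chance of being included."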
Question
Who are these "likely voters" and how are they chosen?
Answer
Likely voters are an elusive group. Each polling organization
has its own way of defining likely voters, mostly involving past voting
behavior and presently-stated intention to vote. No matter how the group
is defined, the reasoning is the same -- pollsters want to know the likely
outcome of the election and therefore are interested in the intentions
of those voters who will actually go to the polls on election day, not
just in the much larger group of registered voters.
There is no perfect way to select the group of likely voters.
People who fully intend to vote when asked three weeks before an election,
for example, might be kept from voting by some last-minute emergency or
might be turned off by negative advertising in the final days of the campaign
and decide to stay home.
Copyright 1999 Los Angeles Times