By Tom Miller
There are surveys and there are surveys. These days, scientific surveys – ones with unbiased questions, asked of a randomly selected sample of residents, with proper weighting of results to make the sample’s demographic profile and aggregate answers similar to the community’s – compete with cheaper surveys that are offered to anyone on the Internet with a link to survey questions. The inexpensive surveys are called “Opt-In” because respondents are not selected; they choose to come to the survey with no special invitation.
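To make that weighting step concrete, here is a minimal sketch, in Python, of simple post-stratification on a single demographic variable. The age groups, population shares, and ratings are invented for illustration; this is not NRC's actual weighting procedure or data.

```python
from collections import Counter

# Hypothetical example: weight respondents so the sample's age mix matches
# the community's (simple post-stratification on a single variable).
respondent_ages = ["18-34", "35-54", "55+", "55+", "35-54", "55+", "55+", "18-34"]

# Census-style population shares for the community (made-up numbers).
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

n = len(respondent_ages)
sample_shares = {group: count / n for group, count in Counter(respondent_ages).items()}

# Each respondent's weight is the population share divided by the sample
# share for their group; under-represented groups get weights above 1.
weights = [population_shares[age] / sample_shares[age] for age in respondent_ages]

# A weighted "percent excellent or good" then counts each answer in
# proportion to its weight instead of treating every respondent equally.
rated_good = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = rated excellent or good
weighted_pct = 100 * sum(w * r for w, r in zip(weights, rated_good)) / sum(weights)
unweighted_pct = 100 * sum(rated_good) / n
print(f"Unweighted: {unweighted_pct:.1f}%  Weighted: {weighted_pct:.1f}%")
```

Real survey weighting typically adjusts for several characteristics at once (for example, by raking), but the basic idea is the same: bring the sample's profile in line with the community's before reporting results.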
As time crunches and budgets shrivel, the cheap, fast Web surveys would be hard to resist, especially if the results they delivered were pretty much the same as those that come from more expensive scientific surveys. The problem, for now at least, is that the results are not the same.
National Research Center, Inc. (NRC) offers, alongside its scientific survey, an opt-in survey of the same content, simply posted on a local government’s website after the trusted survey is done. Not only does the opt-in survey give every resident an opportunity to answer the same questions asked of the randomly selected sample, it also gives NRC an opportunity to explore the differences in ratings and raters between the two respondent groups.
Over the last two years, NRC’s research lab has studied how scientific surveys (mostly conducted using U.S. mail) differ from Web opt-in surveys in responses and respondents across close to 100 administrations of The National Citizen Survey™ (The NCS™). NRC is working to identify the kinds of questions and the best analytical weights to modify opt-in results so they become adequate proxies for the more expensive scientific surveys. We are not alone. The American Association for Public Opinion Research (AAPOR) studies this as well, and if you are in survey research but not studying this, you are already behind the curve.
On average, those who opt to take the self-selected version of The NCS on the Web have different demographic profiles than those who are randomly selected and choose to participate. The opt-in respondents have a higher average income than those who respond to the scientific survey. The opt-ins are more often single-family homeowners, pay more for housing than the randomly selected residents, are under 45 years old, have children, and primarily use a mobile phone.
But as noticeable as those differences are across scores of comparative pairs of surveys, the biggest “physical” differences between the two groups come in the activities they engage in. The opt-in cohort is far more active in the community than the group responding to the scientific surveys. For example, those who respond to the opt-in survey are much more likely to:
Even if the people who respond to surveys come from different backgrounds or circumstances, as is clear from the comparisons we made between opt-in and scientific respondents, their opinions may be about the same. Curiously, if we look only at the average difference between ratings given to community characteristics or services, the opt-in and scientific responses look a lot alike. The average difference in ratings across more than 150 questions and close to 100 pairs of surveys amounted to only about 1 point, with the opt-in respondents giving the very slightly lower average rating.
But behind the average similarity lurk important differences. In a number of jurisdictions, there are large differences between the ratings coming from opt-in respondents and those coming from scientific respondents. These gaps are easy to overlook when the average difference across jurisdictions is small.
For example, take the positive rating for “neighborhood as a place to live.” The average rating across 94 jurisdictions is the same for both the opt-in survey and the scientific survey: 84 percent rating it excellent or good. That’s right, for BOTH kinds of surveys. (Not every jurisdiction’s pair of surveys yields the exact same rating, but the average across many jurisdiction pairs reveals this result.)
When we examine each of the 94 jurisdictions’ paired ratings of “neighborhood as a place to live,” 20 of the pairs differ by 6 or more points. In these 20 jurisdictions, neighborhood ratings are sometimes much higher from the opt-in respondents and sometimes much higher from the “scientific” respondents.
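A small, invented calculation shows how this can happen: paired ratings that diverge in both directions can still produce nearly identical overall averages. The figures below are illustrative only, not actual NCS results.

```python
# Invented example: paired "excellent/good" percentages for five jurisdictions,
# given as (opt-in rating, scientific rating). Not actual NCS results.
pairs = [(84, 84), (90, 82), (78, 86), (88, 81), (80, 87)]

opt_in_avg = sum(o for o, _ in pairs) / len(pairs)
scientific_avg = sum(s for _, s in pairs) / len(pairs)
signed_diffs = [o - s for o, s in pairs]

print(f"Opt-in average:     {opt_in_avg:.1f}")      # 84.0
print(f"Scientific average: {scientific_avg:.1f}")  # 84.0
print("Per-jurisdiction differences:", signed_diffs)  # [0, 8, -8, 7, -7]
print("Pairs 6 or more points apart:", sum(abs(d) >= 6 for d in signed_diffs))  # 4
```

Both methods land at the same overall average in this toy example, yet most of the individual pairs differ by more than 6 points, which mirrors the pattern described above.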
Imagine that a local government decides to break from its trend of scientific surveys and conduct its next survey using only the opt-in version, and a steep decline in the rating for neighborhoods is found. Given our research on the differences in response between opt-in and scientific surveying, we would not be inclined to conclude that the rating difference reflects a real shift in perspectives about neighborhoods when it could have come from the change in survey method alone.
If we can determine the right weight to apply to opt-in responses, we are hopeful that the differences we see in our lab will diminish. That way we will be able to encourage clients to move to the faster, cheaper opt-in method without undermining the trend of scientific data they have built. Until then, the scientific survey appears to be the best method for assuring that your sample of respondents is a good representation of all community adults.
A version of this article was originally published on PATimes.org.