
Old School or New Tech: What Is the Difference with Surveys?

- By Tom Miller -

Old school surveys invite a random sample; new tech surveys allow anyone to opt in on the Web

There are surveys and there are surveys. These days, scientific surveys – ones with unbiased questions, asked of a randomly selected sample of residents, with proper weighting of results so that the sample’s demographic profile and aggregate answers resemble the community’s – compete with cheaper surveys offered to anyone on the Internet with a link to the survey questions. The inexpensive surveys are called “opt-in” because respondents are not selected; they choose to come to the survey with no special invitation.
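For readers who want to see what that weighting means in practice, here is a minimal sketch in Python. The age groups and shares are invented for illustration, not drawn from any census or NCS data; the core idea is simply that each respondent in a demographic group gets a weight equal to the group’s share of the community divided by its share of the sample, so the weighted results mirror the community’s profile.

```python
# Minimal sketch of demographic (post-stratification) weighting.
# All shares below are hypothetical, for illustration only.

community_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}  # e.g., from census figures
sample_share    = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}  # who actually responded

# Each respondent in a group is weighted by community share / sample share,
# so under-represented groups count more and over-represented groups count less.
weights = {group: community_share[group] / sample_share[group] for group in community_share}
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}
```

Real survey weighting is more involved (several characteristics at once, trimming of extreme weights), but this is the basic mechanism.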

As time crunches and budgets shrivel, cheap, fast Web surveys become hard to resist, and they would be especially hard to resist if the results they delivered were pretty much the same as those from more expensive scientific surveys. The problem, for now, is that the results are not the same.

NRC and other researchers are examining the differences

National Research Center, Inc. (NRC) offers, alongside its scientific survey, an opt-in survey with the same content, simply posted on a local government’s website after the trusted survey is done. Not only does the opt-in survey give every resident an opportunity to answer the same questions asked of the randomly selected sample, it also gives NRC an opportunity to explore the differences in ratings and raters between the two respondent groups.

Over the last two years, NRC’s research lab has studied how scientific surveys (mostly conducted using U.S. mail) differ from Web opt-in surveys in responses and respondents across close to 100 administrations of The National Citizen Survey™ (The NCS™). NRC is working to identify the kinds of questions and the best analytical weights to modify opt-in results so that they become adequate proxies for the more expensive scientific surveys. We are not alone. The American Association for Public Opinion Research (AAPOR) studies this as well, and if you are in survey research but not studying this, you are already behind the curve.

Respondents to scientific and opt-in surveys are different

On average, those who opt to take the self-selected version of The NCS on the Web have different demographic profiles than those who are randomly selected and choose to participate. The opt-in respondents have a higher average income than those who respond to the scientific survey. The opt-ins are more often single-family home owners, pay more for housing than the randomly selected residents, are more often under 45 years old, have children, and use a mobile phone as their primary phone.

But as noticeable as those differences are across scores of comparative pairs of surveys, the biggest “physical” differences between the two groups show up in the activities they engage in. The opt-in cohort is far more active in the community than the group responding to the scientific surveys. For example, those who respond to the opt-in survey are much more likely to:

  • Contact the local government for help or information
  • Attend or view a government meeting or event
  • Volunteer
  • Advocate for a cause
  • Participate in a club
  • Visit a park

Responses also differ between the opt-in and the scientific survey takers

Even if the people who respond to surveys come from different backgrounds or circumstances, as is clear from the comparisons we made between opt-in and scientific respondents, their opinions may be about the same. Curiously, if we look only at the average difference between ratings given to community characteristics or services, the opt-in and scientific responses look a lot alike. The average difference in ratings across 150-plus questions and close to 100 pairs of surveys amounted to only about 1 point, with the opt-in respondents giving the very slightly lower average rating.

But behind the average similarity lurk important differences. In a number of jurisdictions, there are large differences between the ratings coming from opt-in respondents and those coming from the scientific respondents. This may be easy to overlook when the average difference across jurisdictions is small.

For example, take the positive rating for “neighborhood as a place to live.” The average rating across 94 jurisdictions, for both the opt-in survey and the scientific survey, is 84 percent rating it excellent or good. That’s right, for BOTH kinds of surveys. (Not every jurisdiction’s pair of surveys yields the exact same rating, but the average across many jurisdiction pairs reveals this result.)


When we examine each of the 94 jurisdictions’ pairs of ratings for “neighborhood as a place to live,” 20 of the pairs differ by 6 or more points. In those 20 jurisdictions, the neighborhood ratings are sometimes much higher from the opt-in respondents and sometimes much higher from the “scientific” respondents.
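A small, entirely hypothetical example (not NRC’s data) shows how averages can behave this way: the jurisdiction-level gaps below are large, yet they nearly cancel out, leaving the two overall averages almost identical.

```python
# Hypothetical percent rating "neighborhood" excellent/good in five imaginary
# jurisdictions, from the scientific and opt-in versions of the same survey.
scientific = [84, 90, 78, 86, 82]
opt_in     = [92, 83, 84, 80, 82]

diffs = [o - s for o, s in zip(opt_in, scientific)]   # [8, -7, 6, -6, 0]
avg_scientific = sum(scientific) / len(scientific)    # 84.0
avg_opt_in = sum(opt_in) / len(opt_in)                # 84.2
big_gaps = sum(abs(d) >= 6 for d in diffs)            # 4 of the 5 pairs differ by 6+ points

print(avg_scientific, avg_opt_in, big_gaps)
```

On average the two methods look interchangeable, even though most of the individual jurisdictions in this toy example would reach different conclusions depending on which survey they used.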

Imagine that a local government decides to break from its trend of scientific surveys and conduct its next survey using only the opt-in version, and a steep decline in the rating for neighborhoods is found. Given our research on the differences in response between opt-in and scientific surveying, we would not be inclined to conclude that the rating difference reflects a real shift in perspectives about neighborhoods when the drop could have come from the change in survey method alone.

Data analysts are testing different weighting schemes

If we can determine the right weights to apply to opt-in responses, we are hopeful that the differences we see in our lab will diminish. That way we will be able to encourage clients to move to the faster, cheaper opt-in method without undermining the trend of scientific data they have built. Until then, the scientific survey appears to be the best method for ensuring that your sample of respondents is a good representation of all community adults.
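As one illustration of the kind of weighting scheme that can be tested, here is a hedged sketch of raking (iterative proportional fitting) on two margins, age group and home ownership. The respondent counts and population targets are made up, and this is not NRC’s actual procedure, just the general technique.

```python
# Sketch of raking (iterative proportional fitting) on two margins.
# Respondent data and population targets are hypothetical.

# Each opt-in respondent is described by (age_group, owns_home).
respondents = ([("under45", True)] * 40 + [("under45", False)] * 10 +
               [("45plus", True)] * 30 + [("45plus", False)] * 20)

# Community targets the weights should reproduce (e.g., from census figures).
age_target = {"under45": 0.45, "45plus": 0.55}
own_target = {True: 0.60, False: 0.40}

weights = [1.0] * len(respondents)

for _ in range(50):  # alternate between the two margins until the weights settle
    for margin, target in ((0, age_target), (1, own_target)):
        totals = {key: 0.0 for key in target}
        for person, w in zip(respondents, weights):
            totals[person[margin]] += w
        grand = sum(totals.values())
        factor = {key: target[key] * grand / totals[key] for key in target}
        weights = [w * factor[person[margin]] for person, w in zip(respondents, weights)]

# Check that the weighted shares now match both targets.
for margin, target in ((0, age_target), (1, own_target)):
    totals = {key: 0.0 for key in target}
    for person, w in zip(respondents, weights):
        totals[person[margin]] += w
    grand = sum(totals.values())
    print({key: round(value / grand, 2) for key, value in totals.items()})
```

In practice, analysts would test schemes like this against the scientific results to see whether the re-weighted opt-in answers land close enough to be trusted.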

 

A version of this article was originally published on PATimes.org.

 
