By Tom Miller
Politicians voice what citizens want because it is elected representatives’ connection to voters that makes policies and laws accepted. Marketers create what patrons want because vendors stay in business only by keeping in close touch with customers. Government managers deliver services the public wants because their job is to build communities that fit residents’ desires. None of these leaders can be sustainably successful without accurate assessment of public opinion.
These days everyone with access to software that “engages the public” or dials a phone can call themselves a surveyor. But leaders need to feel confident that the public opinion research they pay for yields valid results. As with high-fidelity speakers, it can take a real aficionado to distinguish top quality from elaborate promises, and few leaders are connoisseurs of survey research. So leaders rely on shortcuts to determine whether the survey they commission will give accurate findings. Local governments may ask: Is this a “scientific” survey? Does the survey report a margin of error? Does the firm use the latest technology to collect the data?
These aren’t the worst proxies for determining survey quality. But you can do much better. To discern if a survey will yield reliable results, here are ten clues to look for:
You want a survey with results that reveal the same opinions as those of the entire population. Among their own, real survey scientists don’t refer to quality surveys as “scientific.” The term is too broad; it really describes a way of looking at the world that is systematic and methodical. Bad surveys can be both, following ill-conceived systems and bad methods. The term “scientific” is used by good and bad survey researchers alike mostly because it is a familiar word that clients take to stand in for “accurate.” As a consumer, it’s better to say you want a “valid” survey.
Surveys by mail and phone reach more than 90% of U.S. households in every state (from a low of 91% telephone coverage in South Carolina to 98% in North Dakota). The US Postal Service delivers to just about every household in America. Internet access, by contrast, is active in only 84% of homes on average across the country. So an inclusive community survey is one that is accessible to the largest number of residents.
Furthermore, inclusivity requires giving potential respondents multiple chances to participate so that those who are busier or less involved still have a good chance to take the survey. Those who respond only after a second reminder can differ demographically from those who respond at the first prompt. Inclusivity may also mean surveying in multiple languages so that those reluctant to respond in English can still have their opinions heard.
One sign that your survey is valid is similarity between the demographic profile of the respondents and that of the public they are meant to represent. If the survey results are based mostly on homeowners’ opinions but most of your residents are renters, it’s fair to be suspicious.
Even in a well-conducted survey, the demographics of respondents rarely mimic those of the public at large. Survey respondents are more often homeowners, older and white. Getting to a more representative survey requires statistical weighting, which adjusts the respondents’ demographic profile to match more closely what is known about the community’s makeup. Weighting brings each group of participants into its proper proportion in the sample, as the sketch below illustrates.
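As a rough illustration of how that adjustment works, here is a minimal sketch in Python. The groups and percentages are hypothetical, chosen only to make the idea concrete (they are not NRC figures), and real community surveys typically weight on several demographics at once rather than a single one:

```python
# A minimal sketch of post-stratification weighting (hypothetical figures).

# Known community makeup (e.g., from census data) vs. who actually responded
population_share = {"renter": 0.60, "homeowner": 0.40}
sample_share = {"renter": 0.25, "homeowner": 0.75}

# Each group's weight is its true community share divided by its sample share
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

print(weights)  # {'renter': 2.4, 'homeowner': 0.533...}
# A renter's answers now count 2.4 times, offsetting the homeowner skew.
```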
A survey with too many surprises may have missed the mark. Your own experience of your community often should be the touchstone of a survey’s validity. It may seem a paradox that you need a survey to tell you what you already know, but surveys offer a precision and quantification that let you track changes your intuition cannot. That is not to say there should be no surprises at all; a few keep your gut in check.
In principle, it’s easier to agree that a survey should be unbiased than it is to write questions that are balanced, fair and free of local jargon. Many stakeholders are suspicious of surveys conducted by the very organizations being evaluated in the questioning. So questions without slant, often crafted by outside parties, are an important demonstration of fairness.
When the selection of respondents is in the hands of a researcher, the chances are far greater that survey results will reflect the opinions of the entire population than when the door is open for anyone to participate. Invitations should be delivered “probabilistically,” giving all residents living in community households a known chance to participate. For example, NRC researchers systematically select households without bias to respond to a community survey, so every resident has a specified probability of being invited to take the survey. Results of surveys with probabilistic data collection are reported within a range of uncertainty, since the survey participants’ responses are stand-ins for a broader public whose opinions can never be known with certainty. This range is referred to as the “margin of error.”
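For readers who want to see where that number comes from, here is a minimal sketch of the standard 95% margin-of-error formula for a simple random sample. The sample size of 400 is a hypothetical figure chosen for illustration, not one from the article:

```python
import math

# A minimal sketch of the 95% margin of error for a simple random sample.
n = 400   # completed surveys (hypothetical)
p = 0.5   # assumed proportion; 50% gives the widest, most cautious margin
z = 1.96  # z-score for a 95% confidence level

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")  # +/- 4.9%, about five points
```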
Non-probability surveys have their place, but their strengths and limitations should be understood before focusing too much on their typically lower price.
“Outreach” and “input” offer important opportunities for residents to participate in local government, but merely opening the door to resident ideas is not the purpose of rigorous survey research. Day-to-day public engagement activities serve different goals than a valid resident survey is designed to achieve. Don’t expect input from an engagement platform to deliver trustworthy opinions that represent the entire adult population of your community.
When it comes to giving the most accurate results, survey mode matters. Telephone surveys and web surveys have response rates typically in the single digits. Web-only surveys don’t cover all segments of the community equally. Phone surveys are expensive and garner unduly sunny evaluations. Mailed surveys can create probability samples, get more candid responses and are less expensive than phone surveys. Mailed surveys also have response rates usually two to four times higher than phone. So, there definitely are times to choose data collection by phone or web, but local governments should consider mail first.
Too many survey researchers fail to follow the national guidelines for disclosing how they conduct their surveys. Not only is this failure a clue that the researcher may not be the right one for your job, but without well-described methods the client has less flexibility in designing a new survey intended to show trends across time. And when it comes time to confidently act on the survey results, local governments especially need a survey with methods they can refer to, trust and make publicly available.