How to know your survey results reflect the entire community.
Local government feedback is often dominated by the vocal minority who show up to city council meetings and frequently call their elected officials. But outspoken, opinionated residents do not always reflect the majority of people. Because of this, representativeness in decision-making and planning is a priority for many local government leaders. Representation means capturing opinions from various demographics that reflect the entire community.
Surveying is an excellent way to reach diverse voices. But surveys that don’t follow best practices likely do not reflect a community any more than the few residents who show up to council meetings. Kobayashi shares what governments should be aware of when looking for representative survey results.
Sample size refers to the number of people who respond to a survey. Kobayashi recommends a sample size of 350 to 400 residents. A sample of this size has a 5% margin of error, meaning the results may be off by as many as five percentage points in either direction from the true answer. Five percent is the target margin of error in survey science and is even what the Supreme Court considers acceptable.
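For readers curious where numbers like 350 to 400 come from, here is a minimal sketch of the standard margin-of-error calculation for a sample proportion. It assumes a 95% confidence level (z = 1.96) and the most conservative 50/50 split of opinion, neither of which is stated in the article, and it ignores refinements such as finite-population correction.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion.

    n: number of respondents
    p: assumed proportion (0.5 is the most conservative choice)
    z: z-score for the confidence level (1.96 corresponds to 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(384), 3))  # 0.05  -> about a 5% margin of error
print(round(margin_of_error(400), 3))  # 0.049
```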
However, a survey can be biased in many ways that have nothing to do with sample size.
“You could get a sample size of 400, but if it's from a group of folks who are all of one mindset, you actually can completely fool yourself into believing the wrong thing,” Kobayashi said.
A famous example of this blunder comes from The Literary Digest, an esteemed weekly magazine of the early 1900s, which ended its clean streak of correct presidential predictions.
The magazine bragged it would settle the 1936 Franklin D. Roosevelt-Alfred Landon election a month before voting began, polling 10 million Americans drawn from automobile registrations and telephone books. The Literary Digest claimed Republican Landon would beat incumbent FDR. The prediction surprised many because FDR was so well liked. In the end, Roosevelt beat Landon in a landslide: 523 electoral votes to eight.
So what went wrong? Surely 10 million voters could predict who would be president.
Telephone and car owners at the time tended to be more well-to-do and more likely to support Landon’s policies. The Literary Digest fell victim to selection bias, choosing the wrong group of people to participate in the survey. So even though the massive sample of 10 million was well over the 400-respondent gold standard, it was badly skewed.
Local governments must be aware of selection bias if they are concerned about representativeness.
The best way to avoid selection bias is by giving each resident an equal chance of responding to the survey, no matter their demographic. This is known as random sampling. There are a few methods that can boost a random sample’s response rate for better representivity.
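As a concrete illustration before getting into those methods, a simple random sample can be as plain as drawing households at random from a complete list. The sketch below is hypothetical: it assumes you already have such a list (for example, from an address or utility-billing database), and the names and counts are made up.

```python
import random

# Hypothetical list of households; in practice this would come from a
# complete address file so every resident has an equal chance of selection.
households = [f"household_{i}" for i in range(25_000)]

random.seed(42)  # fixed seed for a reproducible illustration only
invited = random.sample(households, k=400)  # each household equally likely

print(len(invited))  # 400
print(invited[:3])   # a peek at who receives the survey invitation
```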
Random sampling takes a little more effort because it requires more outreach and more strategy to connect with a broad range of people. (This is one reason other types of surveying are still beneficial, even if they are less representative: you can often reach more people faster, and they offer supplemental data.) Kobayashi says quality random samples require multiple contacts.
“Contacting people multiple times actually nets you another big group of people who are more reluctant, busier, and have different opinions than the majority of people who are more likely to respond,” she said.
She adds that offering different modes to participate, such as online and mail, tends to increase response rate.
It’s also valuable to focus extra outreach on underheard voices. National data show that younger people, men, renters, and people of color traditionally have lower response rates. Knowing this, local governments can invite more people from underheard groups to take a survey and increase their odds of receiving a response.
Contacting community leaders from underheard groups to help spread the word about a survey is another way to reach people who are typically less represented in survey results.
In addition to outreach, weighting data (statistical adjustments that improve the accuracy of results) is sometimes necessary for more representative results.
As mentioned, there are certain groups that are less likely to respond. But you really never know who will or won’t respond to your survey until it’s done. If you were not able to reach enough people from a certain demographic to achieve representivity, then weighting might be necessary.
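Here is a minimal sketch of one common weighting approach, post-stratification. All the demographic groups and percentages below are hypothetical, and real community surveys typically weight on several characteristics at once, often with more sophisticated methods such as raking; the point is only to show the basic idea of scaling each respondent by how under- or over-represented their group is.

```python
# Hypothetical shares: what the community looks like (e.g., from census data)
# versus who actually answered the survey.
community_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share    = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}

# Each group's weight = its share of the community / its share of the sample.
weights = {g: community_share[g] / sample_share[g] for g in community_share}
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}

# Example: a weighted average of a 1-5 satisfaction rating.
responses = [
    {"group": "18-34", "rating": 4},
    {"group": "55+",   "rating": 2},
    {"group": "35-54", "rating": 3},
]
total_weight = sum(weights[r["group"]] for r in responses)
weighted_mean = sum(weights[r["group"]] * r["rating"] for r in responses) / total_weight
print(round(weighted_mean, 2))  # 3.35 (vs. an unweighted mean of 3.0)
```

In this toy example, the 18-34 group responded at half its share of the community, so each of its responses counts double, while the over-represented 55+ group is scaled down.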
Representivity isn’t always the top priority for all types of surveying. Maybe you’re simply looking for quick feedback or additional information. But for large-scale community surveys to include in strategic planning, it’s essential that data reflect the opinions of all the people who live within a community.
“You want a representative survey so you can make choices based on an accurate sounding board of your population,” Kobayashi said. “Without representation, local governments risk making misguided decisions that only serve a small portion of their community.”
Polco’s benchmark community surveys follow best practices to ensure responses represent your entire community. For quick surveys, representivity visualizations appear next to results to show who you are reaching with your engagement efforts. This makes it easier to identify which groups are responding at rates below their share of your community.
Polco also offers automatic weighting to balance any discrepancies you might see and increase the overall representivity of your results. Together, these tools help ensure that your existing and future survey efforts are as representative of your larger community as possible. If you are looking to reach underheard voices in your community, contact one of our engagement specialists.