By Damema Mann | January 24, 2019
At National Research Center, Inc. (NRC), we can be guilty of using survey research jargon. While we make every effort to clearly explain what we’re talking about, we also understand that our industry is riddled with specialized terminology. However, you won’t need a glossary to understand our survey reports, nor a master’s degree in sociology to read our emails. To help decode some of the survey terminology used by NRC and our profession, here are five of the most common research jargon terms and what they mean.
There are plenty of acronyms floating around our office. With long product names like “Community Assessment Survey for Older Adults,” acronyms make our lives a little bit easier. I won’t go through all of them in this article (we have handouts full of acronyms for our new employees), but here are the most important ones for you to know.
Our company name is a great place to begin. We get a good laugh when we are mistakenly called “National Resource Council” (or other close-but-not-quite names), but NRC stands for National Research Center, Inc.
Other acronyms you should know are for NRC’s main line of products: our national benchmarking surveys.
Benchmarks, or average ratings, are a way to make your survey results actionable and to put them into context. Our benchmarks come from NRC’s enormous national databases and are reported with each of our benchmarking surveys.
By comparing your data to the average ratings of other local governments across the United States, your jurisdiction gets deeper insights. Are your results higher than, lower than or similar to the average? Resident responses, when compared to the national benchmarks, can indicate successes and areas for improvement.
Benchmarking should not be confused with trend data. If you survey with us more than once, we give you a trends report. This shows how the results have changed over time.
Sometimes we call the benchmarking data “norms,” short for “normative comparison data.” At NRC, “norms,” “normative comparison data” and “benchmarks” are all interchangeable terms.
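To make the comparison concrete, here is a minimal Python sketch of the kind of higher/lower/similar classification described above. The ratings, benchmark values and five-point threshold are all invented for illustration; they are not NRC's actual data or comparison rules.

```python
# Hypothetical percent-positive ratings and national benchmarks.
results = {"street repair": 48, "parks": 82, "police services": 71}
benchmarks = {"street repair": 55, "parks": 78, "police services": 70}

for service, score in results.items():
    diff = score - benchmarks[service]
    if abs(diff) <= 5:        # within a few points: treat as similar
        verdict = "similar to the benchmark"
    elif diff > 0:
        verdict = "higher than the benchmark"
    else:
        verdict = "lower than the benchmark"
    print(f"{service}: {score}% ({verdict})")
```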
Margin of error (MOE) indicates the level of precision of a survey result. It is calculated from the number of completed responses: the more responses, the smaller the margin of error.
If money and time were no object, we would love to send a survey to every resident in your community. But say your city has 50,000 people. It would be extremely expensive to survey all of them and would take a very long time to produce and analyze the results. So instead, we take a sample of the full population and work with that.
NRC surveys a sample of residents and uses the number of completed responses to calculate the margin of error. The lower the margin of error, the more precise the results. NRC surveys are designed to be precise, yielding a margin of error within plus or minus five percentage points. That means we can confidently say that, had we surveyed every resident, the results would fall within five percentage points, lower or higher, of what we report.
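For the statistically curious, here is a small Python sketch of the textbook margin-of-error formula for a simple random sample. It assumes the conventional 95 percent confidence level and the worst-case proportion of 0.5; it illustrates the general math, not NRC's exact methodology.

```python
import math

def margin_of_error(n_responses, z=1.96):
    """Worst-case margin of error (in percentage points) for a simple
    random sample: z * sqrt(p * (1 - p) / n) with p = 0.5."""
    return z * math.sqrt(0.25 / n_responses) * 100

# Roughly 400 completed surveys yield about a +/- 5 point margin.
print(round(margin_of_error(400), 1))  # 4.9
```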
A sample is a subset of the entire group. The "sample frame" is your city or town's entire population of households; the "sample" is the group of households that actually receives a survey. Our scientific community surveys are mailed to a sample of randomly selected households.
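As a simple illustration, here is how a random sample might be drawn from a sample frame in Python. The addresses and sample size are made up; real sampling designs involve more than a single random draw.

```python
import random

# Hypothetical sample frame: every household address in a city of 50,000.
sample_frame = [f"{n} Main St" for n in range(1, 50001)]

random.seed(42)  # fixed seed so the illustration is reproducible
sample = random.sample(sample_frame, 1500)  # households that get a survey

print(len(sample))   # 1500
print(sample[:3])    # first few randomly selected addresses
```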
Cross-tabs (short for cross-tabulations) are a way to break down data and dig deeper into the results. We most frequently use demographic and geographic cross-tabs; for example, a demographic cross-tab might show how ratings of parks differ by age group, while a geographic one breaks results down by neighborhood.
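Here is a tiny Python sketch, using the pandas library, of what a demographic cross-tab looks like. The respondent data is invented for illustration.

```python
import pandas as pd

# Hypothetical respondent-level data: one row per completed survey.
responses = pd.DataFrame({
    "age_group": ["18-34", "35-54", "55+", "18-34", "55+", "35-54"],
    "quality_of_life": ["Good", "Excellent", "Good", "Fair", "Excellent", "Good"],
})

# A demographic cross-tab: counts of each rating within each age group.
print(pd.crosstab(responses["age_group"], responses["quality_of_life"]))
```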
From “benchmarks” to “cross-tabs,” there are lots of ways we can help you dig deeper into your data. If you have questions on any of these survey research jargon terms, or local government surveys in general, don’t hesitate to contact us. We are always happy to talk to you.