
Damema Details: Top Five Survey Research Jargon Terms [Video]

Written by NRC | January 24, 2019

-By Damema Mann-

At National Research Center, Inc. (NRC), we can be guilty of using survey research jargon. While we make every effort to clearly explain what we’re talking about, we also understand that our industry is riddled with specialized terminology. However, you won’t need a glossary to understand our survey reports, nor a master’s degree in sociology to read our emails. To help decode some of the survey terminology used by NRC and our profession, here are five of the most common research jargon terms and what they mean.

Survey Research Jargon

NRC and Product Acronyms

There are plenty of acronyms floating around our office. With long product names like “Community Assessment Survey for Older Adults,” acronyms make our lives a little bit easier. I won’t go through all of them in this article (we have handouts full of acronyms for our new employees), but here are the most important ones for you to know.

Our company name is a great place to begin. We get a good laugh when we are mistakenly called “National Resource Council” (or other close-but-not-quite names), but NRC stands for National Research Center, Inc.

Other acronyms you should know are for NRC’s main line of products: our national benchmarking surveys.

  • CASOA: Community Assessment Survey for Older Adults - CASOA provides the information needed for cities, towns and counties to gauge their readiness to support and appeal to the older demographic. It reveals the strengths and needs of older adults in the community, and helps states and Area Agencies on Aging (AAA) comply with the Older Americans Act.
  • The NBS: The National Business Survey - This survey is designed to help municipalities understand the needs of local business owners and the community economy. It is often used to inform choices about business development.


Benchmarking

Benchmarks, or average ratings, are a way to make your survey results actionable and to put them into context. Our benchmarks come from NRC’s enormous national databases and are reported with each of our benchmarking surveys.

By comparing your data to the average ratings of other local governments across the United States, your jurisdiction gains deeper insight. Are your results higher than, lower than, or similar to the average? Resident responses, when compared to the national benchmarks, can point to successes and to areas for improvement.

Benchmarking should not be confused with trend data. If you survey with us more than once, we give you a trends report. This shows how the results have changed over time.

Sometimes we call the benchmarking data “norms,” short for “normative comparison data.” At NRC, “norms,” “normative comparison data” and “benchmarks” are all interchangeable terms.


Margin of Error (MOE)

Margin of error (MOE) indicates the survey result’s level of precision. It is derived from the sample size and the number of responses.

If money and time were no object, we would love to send a survey to every resident in your community. But say your city has 50,000 people. It would be extremely expensive to survey all of them and would take a very long time to produce and analyze the results. So instead, we take a sample of the full population and work with that.

NRC surveys a sample of residents and tallies the number of responses to calculate the margin of error. The lower the margin of error, the more precise the results. NRC surveys are very accurate: they are designed to yield a margin of error of plus or minus five percentage points or less. That means we can say with confidence that the results would fall within five percentage points, higher or lower, of what we would have found had we surveyed every resident.
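The article doesn't spell out the exact calculation NRC uses, but a common textbook approximation for the margin of error of a proportion at the 95 percent confidence level is MOE = z × √(p(1 − p)/n). Here is a minimal Python sketch of that formula (the function name and numbers are illustrative, not NRC's):

```python
import math

def margin_of_error(n_responses: int, z: float = 1.96, p: float = 0.5) -> float:
    """Approximate margin of error for a proportion at 95% confidence (z = 1.96).

    Uses the textbook formula MOE = z * sqrt(p * (1 - p) / n), with p = 0.5
    as the most conservative (widest) assumption.
    """
    return z * math.sqrt(p * (1 - p) / n_responses)

# About 384 completed responses yield roughly a +/- 5 point margin of error.
print(f"{margin_of_error(384):.3f}")  # 0.050
```

Notice that the margin of error depends on the number of completed responses, which is why more responses mean more precise results.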


Sample

A sample is a subset of the entire group. Where the “sampling frame” is your city or town’s entire population, the “sample” itself is the group we send surveys to. Our scientific community surveys are sent to a sample of randomly selected households.
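As a minimal illustration of drawing a random sample from a sampling frame, here is a Python sketch. The household list and sample size are hypothetical, and NRC's actual selection procedure may be more involved (for example, stratified by geography):

```python
import random

# Hypothetical sampling frame: one entry per household in a city of 50,000 households.
sampling_frame = [f"Household {i}" for i in range(1, 50_001)]

random.seed(42)  # reproducible example
# Draw a simple random sample of 1,500 households to receive the survey.
sample = random.sample(sampling_frame, k=1500)
print(len(sample), sample[:3])
```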


Cross-tabs (Cross Tabulations)

Cross-tabs are a way to break down data and dig deeper into the results. We most frequently use demographic and geographic cross-tabs (a small worked example follows the list below).

  • Geographic Cross-tabs - Say you have four districts in your community. We can tag the surveys by where the households are located and break out all the questions by those districts. This allows us to show you how residents responded, grouped by each of those four districts. So for instance, if one district rates the quality of roads much lower than the other districts, this may mean the roads in that area could use extra attention.
  • Demographic Cross-tabs - Demographic cross-tabs are used to break down data and report results by characteristics such as gender, age, race, time spent living in a community (for a resident survey), time spent working at an organization (for an employee survey), and more.
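For readers who work with their own raw data, here is a minimal sketch of a geographic cross-tab using Python's pandas library. The column names and ratings are made up for illustration and are not NRC's:

```python
import pandas as pd

# Hypothetical responses; each row is one completed survey.
responses = pd.DataFrame({
    "district": ["District 1", "District 2", "District 1", "District 3",
                 "District 4", "District 2"],
    "road_quality": ["Good", "Poor", "Excellent", "Good", "Poor", "Good"],
})

# Count road-quality ratings within each district.
table = pd.crosstab(responses["district"], responses["road_quality"])
print(table)
```

The resulting table shows how each district answered the road-quality question, which is exactly the kind of breakdown the geographic cross-tab above describes.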

From “benchmarks” to “cross-tabs,” there are lots of ways we can help you dig deeper into your data. If you have questions on any of these survey research jargon terms, or local government surveys in general, don’t hesitate to contact us. We are always happy to talk to you.

