by NRC on September 13, 2016
By Thomas I. Miller
Despite the contemporary erosion of facts, it’s impossible to run large organizations – private or public – without credible observations about what’s happening and, separately, what’s working. Performance measurement helps with both, and it can be as deliberate as Baldrige Key Performance Indicators or as impromptu as the “How’m I doing?” made famous by former New York City Mayor Ed Koch’s ad hoc surveys of random New Yorkers. Metrics of success, like compass readings, keep the ships of state on course and, because the enterprise is public, make the captain and crew accountable.
Over the years, thought leaders like Ammons, Hatry, and Holzer have made the case for measuring performance in the public sector and offered conceptual frameworks for doing so, especially with an eye to comparing results among jurisdictions. Across the U.S. and Canada, scores of jurisdictions measure and share their performance data. Regional performance measurement consortiums (Florida, Tennessee, North Carolina, Arizona, Michigan, Ontario) remain active, and ICMA, though no longer offering a software platform for sharing performance data, continues “to advocate for the leading practice of creating benchmarking consortia.” All performance measurement consortiums are in roughly the same business: to allow “…municipalities to compare themselves with other participating units and with their own internal operations over time.” Other jurisdictions track their own performance and publish results without the benefit of knowing, or letting others know, how they compare.
For all of these places, measuring performance in the public eye is gutsier than it is complicated, so local governments actively involved in public performance measurement should be lauded for participating in a show and tell that doesn’t always reveal any one place to be best in class or prove improvement over time. Despite the value of measuring performance, especially when done collaboratively, the jurisdictions actively measuring and publicly reporting performance are a small fraction of the 5,500 U.S. cities and counties with more than 10,000 inhabitants – those with enough revenue (probably between $8 million and $10 million) and staff to handle the effort. Across the consortiums listed above, there are only about 120 participating jurisdictions.
So why don’t more jurisdictions participate in collaborative benchmarking?
The risk of looking bad is no small deterrent, but neither are the stringent standards imposed to equate each indicator across jurisdictions. Although measuring performance is neither brain nor rocket science, it does take meaningful staff time to define and hew to extensive collection criteria so that indicators are similar enough to be compared to other places or to the same place across years. For example, police response time sounds like a simple metric, but should the clock start when a call comes in, when the police receive the notice from dispatch, when the patrol car begins driving to the location, or when a non-emergency call is logged?
When a large number of indicators is identified for tracking, the staff time to collect them, following rigorous collection protocols, explodes. For example, in the Tennessee Municipal Benchmark Project, each of the 16 members captures 22 measures for code enforcement alone, as reported in the 426-page annual report for 2015. And that report covers 10 other categories of municipal service in addition to building code enforcement.
We need to lower the barrier to entry and expand the value of participation. The “measure everything” approach (with thousands of indicators) has proven intractable, and the detailed work required to equate measures remains a tough hurdle. If we choose a small set of indicators that offers a stronger dose of culture (outcome measures of community quality) than of accounting (process measures of service efficiencies and costs), we will reduce the workload and, as a bonus, be more likely to attract the interest of local government purse holders – elected officials.
Imagine, across hundreds of places, a few key indicators that report on quality of community life, public trust and governance, and a few that measure city operations. Then visualize relaxing the requirements for near-microscopic equivalence of indicators so that, for example, any measure of response time could be included as long as the method used to collect it is described. Statistical corrections then could be made to render different measures comparable. This is what National Research Center does in its benchmarking to equate survey responses gleaned from questions asked differently.
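As a rough illustration of the kind of statistical correction involved, the sketch below standardizes a response-time measure within each collection method so that jurisdictions using different definitions land on a common scale. This is only a minimal, hypothetical example in Python: the jurisdiction names, method labels, and figures are invented, and it is not a description of National Research Center’s actual benchmarking methodology.

```python
# Minimal sketch (hypothetical data, not NRC's actual method): standardize a
# measure within each collection-method group so jurisdictions that define
# "response time" differently can still be compared on a common scale.
from statistics import mean, stdev

# (jurisdiction, collection_method, avg_police_response_minutes) -- all invented
records = [
    ("Alpha City",    "clock_starts_at_call",     6.2),
    ("Beta County",   "clock_starts_at_call",     7.8),
    ("Gamma Town",    "clock_starts_at_dispatch", 4.1),
    ("Delta Village", "clock_starts_at_dispatch", 5.0),
]

# Group values by how each place starts the clock.
by_method = {}
for juris, method, minutes in records:
    by_method.setdefault(method, []).append(minutes)

# Express each result as a z-score within its own method, so "fast" and
# "slow" mean the same thing regardless of the definition used.
for juris, method, minutes in records:
    values = by_method[method]
    z = (minutes - mean(values)) / stdev(values)
    print(f"{juris}: z = {z:+.2f} (lower is faster within '{method}')")
```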
Besides the time and cost barriers to entry, there have been too few demonstrations of the value of the performance management ethos. We know it’s the right thing to do, but we also know that with relatively few jurisdictions collecting common metrics, researchers are hampered from exploring the linkages between government processes and community outcomes. Too often, comparisons among jurisdictions become a game of “not it,” whereby staff explain away the indicators on which their jurisdiction scores poorly. When we expand the number of places willing to participate, we will have a better chance of offering a return on investment in performance measurement. With many more participating governments, we can build statistical models that suggest promising practices by linking processes to outcomes.
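To make that payoff concrete, here is a small, purely illustrative sketch of the kind of model that becomes possible with many participating governments: relating a hypothetical process measure (spending per capita) to a hypothetical outcome measure (resident ratings). The figures are invented and the model is deliberately simple; it only gestures at what linking processes to outcomes might look like.

```python
# Illustrative sketch only: with enough participating governments, one could
# fit simple models linking a process measure (e.g., code enforcement spending
# per capita) to an outcome measure (e.g., resident ratings of neighborhood
# upkeep). All figures below are made up.
import numpy as np

process = np.array([12.0, 18.5, 9.0, 22.0, 15.5, 30.0])   # $ per capita (hypothetical)
outcome = np.array([61.0, 68.0, 55.0, 74.0, 66.0, 81.0])  # % rating upkeep favorably (hypothetical)

slope, intercept = np.polyfit(process, outcome, 1)         # simple linear fit
r = np.corrcoef(process, outcome)[0, 1]                    # correlation across jurisdictions

print(f"Estimated outcome gain per extra dollar per capita: {slope:.2f} points")
print(f"Correlation across jurisdictions: r = {r:.2f}")
```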
We can broaden participation in comparative performance monitoring when common metrics are few, targeted to outcomes, easy to collect and proven to matter. It’s a good time to make these changes.
This article originally appeared on the ASPA National Weblog.