Measuring What Matters

January 4, 2016
Marc Holley
Valerie Bockstette
Cheri Recchia
5 grant performance measurement traps and how to avoid them

These days, foundations, evaluators, and consultants are spending a lot of time advocating new ways to measure progress and deal with “emergence” when working on complex, systems-change initiatives. At the same time, sitting underneath any systems-change effort is the day-to-day craft of grantmaking. The very real need to determine how best to allocate foundations’ limited resources requires generating robust performance measures that drive accountability, learning, and impact—for each and every grant.

Performance measures—statements of output or outcome indicators established contractually between funder and nonprofit grantee—should reflect a shared understanding among the grantee, the funder, and the evaluator of what will constitute success. They set everyone up for objective assessment of progress and lessons learned, both along the way and at the end. But it’s important to be aware of and manage the perverse incentives a rigorous performance measurement system can create. Building on the observations of Daniel Stid of the Hewlett Foundation, we have found that when program staff and grantees fall into the five performance measurement traps we’ve identified below, it compromises the learning process.

The first three traps lead to creating too many meaningless measures, and the last two lead to a reluctance to try to measure what matters most about the work.

Meaningless Measures

1. The Micromanagement Trap

Sometimes well-meaning program staff and grantees want to lay out every step of the work they need to accomplish during the grant period, in minute detail. Planning is essential, but overbuilding performance measures can lead to losing sight of what really matters. For example, our foundation recently gave a capacity-building grant to a nonprofit organization that trains school principals. The nonprofit had grown to the point that it needed to bring its analysis of its trainees’ impact on school performance in-house. The grant’s proposed deliverables included drafting and advertising a chief academic officer job description, hiring the person, and purchasing new data systems. These were all necessary tasks, but none of these measures could indicate whether the new person was functioning effectively. A tighter measure might read: “By year end, the organization will have increased data analysis capacity, as demonstrated by the posting of a report on the academic performance of schools led by trained principals.”

The number of deliverables should to some extent scale with the size of the grant, but having too many obscures the central purpose of the work. Foundations also need to trust grantees to execute as experts in their domains; it’s not necessary to monitor every single step along the way.

2. The Hedge Trap

Also common is the inclusion of relatively minor performance measures—related to administrative actions, for example—to hedge against the risk of a bad evaluation. Since it’s virtually guaranteed that the grantee will accomplish these minor measures, they balance against more meaningful measures that the grantee may not meet—or so the thinking goes. For example, we were recently working with a grantee to increase its organizational sustainability. The original proposal included numerous measures for the processes involved in identifying potential funders and developing and submitting funding requests. While we are interested in the grantee’s process, we felt that dollars raised and funding base diversification were more meaningful measures, and adjusted accordingly.

Performance measures should be achievable, but also ambitious. We expect that grantees will meet some performance measures, and not meet or only partially meet others. Indeed, if they meet all the measures, we may wonder if the goals were simply set too low. Conversely, if they fall short on everything, we may not judge the grant a failure, especially if we learned important lessons for the work moving forward.

3. The At-Least-It’s-Measurable Trap

“Measurable” and “meaningful” are not the same thing. Often, very easily measurable items are less important than harder-to-measure items. For example, a proposed measure related to increasing an organization’s visibility as evidence of its growing influence might be: “Post five times to the organization’s blog, 20 times on Facebook, and 30 times on Twitter about the importance of protecting wildlife habitat and water quality, reaching 2,000, 30,000, and 45,000 people, respectively.” But the importance of these activities and the significance of the targets are unclear. A better deliverable would focus on the intended results of these actions, such as increased organizational membership.

To determine whether performance measures are meaningful, we ask:

  • Are the outputs and outcomes tightly linked?
  • Is it a routine activity that really should not be assessed, or is it an essential product or service worthy of being tracked?
  • Have we prioritized informative deliverables, for which success or failure may lead us to do something different in the future?
  • Are the grantee’s outcomes linked to the ultimate impacts that our foundation is seeking?

Reluctance to Measure What Matters Most

4. The Full-Control Trap

The ultimate goals of our work as funders—whether improved student outcomes or a healthier environment—are usually beyond the direct control of grantees. As a result, grantees can be reluctant to include outcome performance measures related to larger goals. Recently, a grantee working to reform school finance proposed to provide technical assistance to policymakers and generate white papers designed to influence system reform. But the organization was worried that funders would hold it accountable for policy change that resulted from the assistance and recommendations.

Nonprofit organizations cannot fully control many outcomes, including policy improvements. That said, the whole point of providing funding to organizations is to bring their influence to bear on solving difficult problems. It’s not sufficient merely to commit to trying hard. Funders and grantees should feel accountable for their results in the end, and they shouldn’t shy away from bold ideas.

5. The Complexity-Cannot-Be-Measured-Objectively Trap

Many grantees engage in advocacy, and some are piloting new systems-change approaches. These efforts often require shifts in strategies and tactics mid-course, and their complexity and relative unpredictability can trigger reluctance to establish meaningful performance measures. But in our experience, it’s possible to readily adapt rigorous performance measurement to this work. We would argue that it’s essential to plan and assess progress against clearly stated measures, even for work on complex problems. For example, we can measure awareness of and support for policy reform, or grantees’ access to policymakers and opinion leaders. And assessing whether public will is shifting or whether a preferred policy solution is gaining prominence on the policy agenda can indicate whether our investments and partnerships are making a difference. It’s also worth noting that establishing rigorous performance measures doesn’t lock grantees into work that no longer makes sense when circumstances change; funders and grantees can work together to amend performance measures during the course of a grant.

There’s sometimes a related concern that it’s not useful to measure small changes in complex systems, because systems-level change requires a long-term effort by many groups whose contributions we can’t disentangle. In some ways, these arguments are valid, and to address these concerns, we have started to apply coalition planning and collective impact assessment tools, in addition to performance measures, to our evaluations. We have not, however, abandoned the useful, objective measurement of work to reform complex systems. For example, in our advocacy investments related to the Gulf of Mexico environmental restoration efforts, we are assessing a network of grantees that have coordinated their efforts on shared goals, but each group still also has unique responsibilities articulated in its own performance measures.

In conclusion, while the rationales behind these five performance measurement traps are understandable, they can lead us to respond to the wrong incentives. Good performance measurement can be uncomfortable, but it helps both grantees and funders learn what works and what doesn’t, and ideally gain some insight into why.

This article originally appeared in Stanford Social Innovation Review.
