When to Follow the Herd

September 29, 2016
Marc Holley
Deciding whether to go with the flow or to deviate from the group

Just because everyone else is doing something does not make it right. We’ve heard this refrain from a young age. But sometimes it makes good sense to follow conventional wisdom or “best practice.” In shaping a foundation’s grantmaking or evaluation approaches, the first step in deciding whether to go with the flow or to deviate from the group is to know exactly where everyone stands. The recently released exploratory work from CEP and CEI to help define the scope and dimensions of evaluation in the field of philanthropy is tremendously valuable for foundations interested in getting the most out of their evaluation and learning investments.

The report, titled Benchmarking Foundation Evaluation Practices, can help foundation leaders identify the right questions to ask, such as: How many evaluation staff should we have? And to whom should they report?

At the Walton Family Foundation (WFF), where I head up our evaluation efforts, we have built out our evaluation and learning framework over the past five years. We now have eight full-time evaluation and learning staff (for context, we have 94 total staff and gave approximately $375 million in 2015). Below, I offer some quick observations on noteworthy findings from the report and share some of the questions we are asking.

Some quick positive takeaways:

  • Perhaps the most important function a learning and evaluation staff can serve is to inform strategy both at the outset and mid-course of a project or initiative. So, it is encouraging that 90% of evaluation teams are providing research or data to inform grantmaking strategy, and 87% spend time helping to refine grantmaking strategy during implementation.
  • It is heartening to see that 50% of respondents in the study — senior evaluation or program staff at U.S. and Canadian foundations giving at least $10 million annually, or Evaluation Roundtable members — agreed that funding levels for learning and evaluation have increased relative to program budgets over the past two years. Spending roughly 3% of the budget to learn how to better spend the other 97% makes good sense. The survey data, and our experience at WFF, also suggest that it can take time to demonstrate the value of evaluation investments. But the field is headed in the right direction.

Things to consider further:

  • Half of respondents report doing at least nine different evaluation and learning activities, and roles for evaluation staff have been steadily expanding, as the Center for Evaluation Innovation et al. noted in 2013. On the one hand, this may suggest that evaluation team members are considered valued colleagues who can contribute across a range of activities. On the other, it may be important for evaluation staff not to be stretched too thin or pulled away from where evaluation expertise is most needed. We love being included, but when should evaluation staff decline to engage on a new task?
  • Foundations can be influential through a number of channels, including convening power and thought leadership. Still, our primary function is to provide grants to support grantees as they drive social and environmental change. So, evaluating at least some of the grants themselves seems important — especially when the funding amount is large or the grant supports a pilot project. However, 29% of respondents report not evaluating at the grant level at all, and only 34% say evaluating grants is a top priority. Here’s a point where we appear to break from the herd. At WFF, we have historically prioritized evaluating grants, in part because we think it is hard to understand how we may be contributing to systems change if we don’t know how well our grants are working. In our Education Program alone, we conducted 97 grant evaluations in 2015 (out of 195 new grants approved that year). In the spirit of learning and improvement, we are asking ourselves about the balance of grant evaluations against other types of evaluation we would like to do.
  • 57% of respondents say they disseminate findings externally, but 71% say they don’t do this enough. Sharing lessons learned, including failures, can be time-consuming and delicate. Even when a foundation tries to own failure, it is difficult to communicate the story of a disappointing effort without coming across as criticizing a grantee. And it can also be imprudent to share lessons learned about certain types of grants, particularly advocacy investments. Still, we have greatly benefited when other foundations have shared their lessons learned publicly. We have done a little, but can we do more?

Additional questions:

  • 69% of respondents say they are spending time “improving grantee capacity for data collection or evaluation.” Still, 69% also say their foundations invest too little in this. Our goal is to have evaluation be useful not only for us, but for our grantee partners, too. At WFF, we have created written and video guides to assist grantees, but we know we can improve. And our concerns extend beyond merely technical aspects of measurement. While we often get to a trusting relationship about learning and evaluation after working with grantees for several years, can we modify our approach and move more quickly past a compliance mindset to true partnerships with grantees? In the coming months, we will be reaching out to grantees for advice.
  • How can we better design and facilitate learning processes or events within the foundation? Periodically, we have external researchers join our team to discuss a new study that probes our theory of change. And while program, evaluation, and leadership staff routinely discuss lessons learned based on our grant evaluations, we are with the 48% who say they could do a better job of this. We will be looking to learn from other foundations who are doing this well.

There is no single way to structure a foundation’s evaluation and learning framework; the right structure depends on the nuanced context and culture of each organization. Having this benchmarking information in hand while weighing the tradeoffs of different approaches is essential.

This post originally appeared on the Center for Effective Philanthropy blog.
