Benchmarking Foundation Evaluation: 6 Takeaways from the Latest Study

It’s one of those moments when you realize your love of evaluation has reached true geekdom status. You wait with bated breath for the release of the new Center for Evaluation Innovation benchmarking report. And when you find out it’s been released, you immediately download it and set it aside, taking occasional peeks, until that perfectly quiet moment when you can sit down and read it in a single uninterrupted sitting. (It’s not just me, right?)

What’s so significant about this one report? If you are among the uninitiated, I’m happy to share. This survey offers an ongoing, comprehensive look at how evaluation and learning are practiced within philanthropy. It seeks to understand “evaluation functions and staff roles at foundations; the level of investment in and support of evaluation; the specific evaluation activities foundations engage in; the evaluative challenges foundations experience; and the use of evaluation information once it is collected.” It offers a unique look at how one very powerful swathe of the social sector (i.e., philanthropy) commissions and uses evaluation.

Spoiler alert: It’s not all pretty.

This year’s report is particularly intriguing for two reasons. First, it reveals some interesting trends over time in areas such as staffing, evaluation use, and grantmaking. Second, this year’s survey includes some great new questions such as how grantee and community needs are reflected in philanthropic evaluation practice. Below are six takeaways that stand out as worthy of more sector discussion.

What have we learned about changes in foundation evaluation over time?

The survey has been tremendously useful for understanding the evolution of philanthropic evaluation over time (small n’s from early surveys notwithstanding). Some interesting facts:

1. Growth in program staff appears to be outpacing growth in evaluation staff. The ratio of evaluation to program staff among survey respondents widened from 1:10 in 2015 to 1:16 in 2019, which seems like a fairly big leap. It’s not clear what’s underlying this trend. For example, are foundations becoming more focused in their evaluation efforts and/or outsourcing more work to consultants? Or, could this be a sign that demand for evaluation in philanthropy may be leveling off? The data on perceived declines in evaluation funding relative to program is more consistent with the latter interpretation.

2. Challenges persist when it comes to evaluations generating meaningful insights. When I first glanced at these findings, I was disappointed to see the proportion of foundations still reporting challenges in producing meaningful insights for grantees, funders, and the field. Many of us working in evaluation considered the 2015 report findings a call to action. In fact, these findings were partly what inspired the launch of the Funder & Evaluator Affinity Network (FEAN), an effort to transform how funders and evaluators collaborate around these issues. In looking more closely though, I was pleased to see that the proportion of funders reporting challenges in generating insights for the field and grantees had dropped in rank, suggesting some modest improvement.

3. More evaluation staff have programmatic grantmaking responsibility, but fewer have their own budgets for evaluation-related grantmaking and contracts. Our work with FEAN has really highlighted the need to build greater consultant capacity in philanthropic evaluation across the field. Thus, I was disappointed to see that evaluation staff, on average, are taking on more programmatic grantmaking responsibility (from 42 percent in 2015 to 61 percent in 2019) while simultaneously exhibiting a decline in budgets for evaluation-related grantmaking and contracting (from 79 to 68 percent). I’m curious to better understand what types of grants evaluation staff are making and how much and what types of resources are being directed toward field-building. The evaluation field could greatly benefit from research and development investments around accelerating talent development, advancing equity in evaluation, and creating shared learning spaces for evaluators working in and with philanthropy.

Now how about those new questions?

This year’s benchmarking report explores some new topics, building on conversations that have been taking place in the broader field about diversity in philanthropy, who evaluation benefits, and internal organizational context.

4. As suspected, the majority of evaluation staff are woefully white. This year’s survey included a new question about the race/ethnicity of evaluation staff, and guess what? According to survey respondents, most are white (60 percent). This suggests that there is much room for improvement when it comes to diversifying staff and ensuring they reflect the communities most in need of foundation resources. At the same time, it’s nice to have data that can be used to measure progress on this issue over time. On a related note, FEAN will be sharing recommendations regarding what’s needed to help evaluators of color thrive within the field. Engage R+D will also be releasing results from listening sessions we conducted with evaluators of color in California later this spring to help center their voices in field-wide discussions. Let’s not simply scrutinize this problem; let’s fix it.

5. Grantees and communities continue to be an afterthought when it comes to foundation evaluation. It was upsetting to see that 71 percent of foundations only occasionally or never give grantees and communities the power to shape and participate in the evaluation process. And as in 2015, there continues to be more room to grow when it comes to sharing evaluation findings with grantees (40 percent rarely or never do this) and supporting grantees to conduct their own evaluation (63 percent said that evaluation dollars are included in less than 10 percent of individual grants). Clearly, more needs to be done to de-center foundations as the primary user of evaluation and broaden the benefits of evaluation for grantees and communities. I can’t help but wonder too whether there are connections between this finding and the one above.

6. Foundations experience a heck of a lot of transition. Philanthropy has a reputation for being fickle, periodically changing grantmaking priorities and partners over time. Indeed, over half of respondents reported experiencing major organizational changes in the past three years. This includes things like a new strategic plan or strategy refresh, staff restructuring, changes in program area priorities, and organization-wide diversity, equity, and inclusion (DEI) efforts. I wonder whether the transitions taking place within foundations create barriers—and/or opportunities—to making practice shifts that support greater inclusion of grantees in evaluative efforts and more attention to how communities and the broader field can benefit.

Overall, it’s an interesting read and the findings provide a lot for evaluators working with and within foundations to ponder. What contextual factors influence the design and management of these functions? What will it take to make more progress on identified challenges? How might foundation evaluation staff and their consultants strengthen the way they partner in support of shared goals? I look forward to more discussion on the implications of these findings for the field.