Five Lessons for Measuring Capacity Building: Evaluating the PropelNext Initiative

by Mary Gray and Sonia Taddy-Sandino

For decades, grantmakers have recognized the importance of investing in nonprofit capacity to help organizations strengthen their effectiveness and fulfill their missions, yet assessing the longer-term impact of such investments on organizational effectiveness is costly and complicated. As a result, most evaluations have focused on shorter-term outcomes (e.g., knowledge acquisition and skill-building) rather than meaningful but hard-to-measure impacts such as improved program quality, enhanced organizational performance, and better outcomes for the people nonprofits serve. In partnership with the Edna McConnell Clark Foundation (EMCF) and its California Funder Partnership, we recently explored how an intensive cohort-based capacity-building program called PropelNext is shifting organizational culture and practices in potentially transformative ways.


About PropelNext

In 2012, EMCF launched PropelNext, a signature initiative designed to build the capacity of youth-serving nonprofits to deliver high-quality programs and services that improve outcomes for underserved young people. As part of a cohort, grantees receive a common curriculum and capacity-building support from a dedicated team of coaches and consultants over a three-year period. Through a combination of an online learning platform, individualized coaching, peer learning sessions, and financial support, grantees develop theories of change and design and pilot research-informed program models. They also strengthen performance management systems and engage in a test-and-learn cycle that promotes a culture of learning and continuous improvement. (See PropelNext’s Theory of Change.)

The first three cohorts of PropelNext grantees (i.e., National 2015 Cohort, California 2018 Cohort, and Northern California 2021 Cohort) represent a geographically diverse cross-section of 40 organizations working in a range of areas including juvenile justice, foster youth, homelessness, and student re-engagement, and serving youth with significant risk factors, trauma, and other barriers to reaching their full potential. Our recent evaluations of PropelNext capture the journey of grantees as they embed learning and performance management practices into their organizational DNA. The studies also reveal the power of collaborative learning when funders, grantees, consultants, and evaluators openly reflect and learn together.


Lessons Learned about Measuring Nonprofit Capacity

To better understand the impact of its capacity-building investments, EMCF commissioned a post-program study of its inaugural National 2015 Cohort as well as developmental evaluations of its second and third cohorts based in California. The opportunity to evaluate multiple cohorts over the last several years has yielded a number of cross-cutting insights about the program as well as our approach to evaluating capacity-building initiatives. Below are five key lessons that we believe have had the greatest impact on our work.

1. Engage stakeholders in co-designing the evaluation and determining measures of success. We applied a collaborative approach to designing the PropelNext evaluations, engaging EMCF, capacity-building consultants, and grantees to guide and ground the evaluation. During the discovery and design phase, we partnered with EMCF and the capacity-building consultants to identify key learning questions, which we revisited at key junctures along the way. We also made time to engage with grantees, learn more about their organizations, and solicit their input about indicators of success. Actively engaging stakeholders – funders, consultants, and grantees – in identifying indicators of progress and defining success led to a more robust design and an opportunity to surface insights and findings we may have otherwise missed. It also helped to model and promote a culture of learning, which is what PropelNext is all about!

2. Focus on shifts in behavior and practice, not just knowledge and skills. Our evaluation team sought to identify observable evidence and proof points where shifts in behavior and practice could be documented over time. We aligned our evaluation studies with the Dimensions for Building a Learning Organization (DBLO) framework, a rubric developed by EMCF and LeadWell Partners to identify progress indicators for the primary intervention. To assess the indicators related to adaptive leadership, talent management, and shifts in organizational culture, we leveraged select measures and proof points from the Performance Practice developed by the Leap of Reason Ambassadors Community (2017). These frameworks allowed us to go beyond changes in knowledge and skills and focus on deeper changes in staff behaviors and organizational practices.

3. Gather a 360-degree perspective using mixed methods. Often, evaluations of capacity-building efforts rely on self-reported interview and survey data from those receiving the assistance directly. Our evaluations went deeper, gathering quantitative and qualitative data through in-depth site visits; document review; meeting observations; and surveys, interviews, and focus groups. What proved particularly valuable was gathering information, at multiple points in time, from a variety of stakeholders: grantee leaders, managers, front-line staff, board members, community partners, and the PropelNext coaches, consultants, and funders involved in the California Funder Partnership. As part of the California 2018 Cohort evaluation, for example, we surveyed, interviewed, and observed front-line staff who were one step removed from PropelNext to assess the extent to which key practices were permeating the organization and taking root at various levels.

4. Extend the time horizon for evaluation to capture long-term impact. Shifts in practice and culture take time. Although we could not use a fifteen-year time horizon like the recent groundbreaking capacity-building study by Lewis Faulk and Mandi Stewart (2016), we were able to follow up with the National 2015 Cohort two years post-program and conduct a retrospective measurement of change over time. Our research revealed marked growth in organizational budgets, the number of youth served, the size of data and evaluation staff, expansions in programming, and new program sites. By extending the evaluation time horizon, we were able to capture the full cascade of outcomes throughout and beyond the program.

5. Create a platform for shared reflection. At the heart of developmental evaluation is a focus on learning from implementation and capitalizing on opportunities to share, reflect, and refine programs and strategies as they unfold. Our work included rapid-feedback memos and “sense-making” sessions with the funders, grantees, and coaches throughout the initiative. We regularly shared evaluative themes and insights, and then engaged groups in discussing implications, refinements, and/or actionable recommendations. We also hosted grantee webinars and shared evaluation findings at grantee convenings. We found that this created a sense of ownership, fostered stakeholder collaboration, and enabled all stakeholders to use evaluation to learn, improve, and build evidence for what works.


We are excited that our evaluations have yielded promising findings that contribute to an evolving field and deepen our understanding of what it takes to optimize nonprofit performance. Through this work, we’ve come to fully appreciate that the road to high performance is a journey, but an intentional focus on organizational learning and continuous improvement has the potential to propel organizations to new heights and strengthen their impact in the communities they serve.