Promise Monitoring and Evaluation Framework

by Jennifer Iriti & Michelle Miller-Adams 

Overview 

Promise programs across the country vary significantly in their broad purposes, but tend to have at their core one or more of the following three goals: 

  • Catalyze improvements in the pre-K-12th grade system; 

  • Improve students’ postsecondary enrollment, persistence, and degree attainment; and/or, 

  • Stimulate economic and community development. 

Despite these variations in program goals, many Promise leaders, funders, and stakeholders share an interest in monitoring and evaluating the progress of the program. This tool was developed in 2016 by Jennifer Iriti of the University of Pittsburgh’s Learning Research & Development Center and Michelle Miller-Adams of the W.E. Upjohn Institute, with generous support from the Lumina Foundation. The goal is to help those considering Promise programs or those with established programs think about how to measure progress and impact. The authors drew on their work in their respective Promise communities as well as the Promise research base to provide a consolidated summary of approaches to measurement and specific indicators that could be tracked. 

The overall Monitoring and Evaluation Framework includes information and resources for the following important components of developing a useful measurement plan for a Promise program, presented in sequential order of how one might go about creating such a plan. 

Articulating the program theory of change 

As place-based initiatives, Promise programs are intended to be more than simple scholarships. The goals of most Promise programs generally fall into three categories: 1) creating or supporting a college-going culture in the pre-K-12th grade setting, 2) increasing access to and success in post-secondary education, and 3) bringing about community-level change, whether that is defined as the development of a more educated workforce, an improved quality of life, or other forms of economic development. But how do Promise programs bring about these desired outcomes? What needs to happen in order for the financial award and other components of the program to actually affect these goals? A Theory of Change can provide insight into these questions by codifying the expected early, intermediate, and long-term changes sought as part of the Promise effort. 

In Pittsburgh, for example, a simplified theory of change showed the different pathways of the direct impacts of the Promise on students' college going as well as the indirect effects of the place-based nature of the initiative through broader changes to the K-12 and post-secondary ecosystem. Promise program leaders and researchers may find it valuable to use some readily available tools and processes to develop a theory of change specific to the community in which it is being implemented.  

Understanding Program Implementation Timeline 

Which outcomes can be measured, and when, will depend on how the Promise program is intended to work and how any program eligibility criteria are enacted. Developing a Theory of Change, as described in the prior section, is an important first step in thinking about what needs to happen in order for a particular Promise program to realize its intended outcomes. Since Promise programs represent long-term efforts at systemic change, we suggest carefully considering when various targeted outcomes can even be detected. To do that, one must clearly understand when the key components of the Promise intervention are fully implemented or have had sufficient time to be experienced by students. 

The Pittsburgh Promise, for example, made scholarship dollars available beginning with the high school graduating class of 2008. This Promise program has GPA, attendance, and residency requirements that were phased in over three cohorts of students (a programmatic choice to give students graduating soon after announcement more of an opportunity to access the scholarship). In addition, it took time for relevant supports to be developed and implemented by the Pittsburgh Promise, the Pittsburgh Public Schools, families, and community partners.  

Other programs with simpler requirements, such as the Kalamazoo Promise, made the full amount of benefits available to the first eligible graduating class (Class of 2006), but students who were seniors when the program was announced in November 2005 had very little time to adapt their high-school experience in light of the new financial aid resources now available to them. Successive graduating classes had more years of exposure to the Kalamazoo Promise, so one would expect to see greater impacts in both the K-12 setting and post-secondary outcomes. Not until 2018 did students who spent their entire K-12 years knowing about the availability of the Kalamazoo Promise graduate from high school. This cohort was the first to represent the full "Kalamazoo Promise effect." 

Some communities may announce the creation of a Promise program to take effect several years down the road, giving students time to adjust their expectations in light of future benefits. The roll-out of benefits in these cases may lead to different timing in terms of when results might be captured. 

Documenting programmatic interventions 

Upon announcement of a Promise program a range of actors and stakeholders may respond in many ways, such as by implementing new programs, changing resource allocation or intensity, forming new partnerships, or shifting focus and attention to different issues. It is important to document these ecosystem shifts in order to later understand outcomes data. 

Key areas to attend to: 

  • Availability of high school to college bridge programs/school year transition programs/senior year transition courses 

  • Development of early assessment and intervention programs  

  • Programming aimed at "college knowledge," including college visits or summer outreach programs 

  • Development of programming around career interests and links between careers and educational pathways (internships, partnerships with employers) 

  • College readiness programs 

  • Embedded college and career counseling 

  • College assessment (SAT/ACT) programs: test preparation, financial aid for fees, increased access through "SAT Days" 

  • FAFSA completion and support system 

  • College application process supports 

Documentation of the nature of the programming or resource shifts, including when the change was made and for whom, could increase the power of outcomes data that are obtained at a later date. 

Identifying Appropriate Indicators 

Potential indicators are organized into three broad outcome areas (the K-12 system, post-secondary outcomes, and community development/economic revitalization) to align with the goals of most current Promise programs. We offer a comprehensive list of potential indicators in each of the three outcome areas, along with a rationale for each indicator and some possible data sources. Some of these indicators are strongly predictive of future success in a post-secondary setting and thus are important for school districts to track, while others are tightly tied to and would be directly influenced by the implementation of a Promise program (and some indicators do both). Of course, whether, how, and where these data might exist and in what form will vary across systems, and it will take some exploration to determine the specific data sources that might yield the indicators described. 

A note on collecting baseline data. The task of measuring the impact of a Promise program, especially on post-secondary access and attainment, will be easier if researchers capture baseline data before a Promise program is announced or implemented. Useful information includes the college-going patterns of a district’s graduates (available through the National Student Clearinghouse); student attitudes, expectations, and aspirations regarding plans after high-school graduation; and “college knowledge,” that is, awareness of college costs, the application process, and so on. These can be obtained through surveys of high-school students. 

In addition to the indicators listed here, Promise programs will also want to collect some basic information about scholarship usage. These indicators could include annual metrics for: 

  • Rate of student eligibility (if eligibility criteria exist) 

  • Rate of student scholarship use 

  • Post-secondary institutions attended by scholarship recipients 

  • Amount of money spent by scholarship program 

  • Additional scholarship dollars accessed by Promise recipients 

  • Academic performance of Promise scholars in post-secondary institutions 

  • Post-secondary retention, progression, and completion 

  • Degrees or credentials received by Promise recipients 
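As a concrete illustration, the eligibility- and usage-rate indicators above can be computed from student-level records. The sketch below is a toy example; the field names (`cohort`, `eligible`, `used_scholarship`, `dollars`) and the sample figures are hypothetical and do not reflect any actual Promise program's data system.

```python
# Toy sketch: computing annual scholarship-usage metrics from hypothetical
# student records. Field names and sample values are illustrative only.
from collections import defaultdict

def annual_usage_metrics(records):
    """Aggregate per-cohort eligibility rate, usage rate, and dollars spent."""
    by_cohort = defaultdict(lambda: {"n": 0, "eligible": 0, "used": 0, "dollars": 0.0})
    for r in records:
        c = by_cohort[r["cohort"]]
        c["n"] += 1
        c["eligible"] += r["eligible"]
        c["used"] += r["used_scholarship"]
        c["dollars"] += r["dollars"]
    return {
        cohort: {
            "eligibility_rate": c["eligible"] / c["n"],
            # Usage rate is calculated among eligible students only.
            "usage_rate": c["used"] / c["eligible"] if c["eligible"] else 0.0,
            "dollars_spent": c["dollars"],
        }
        for cohort, c in by_cohort.items()
    }

records = [
    {"cohort": 2008, "eligible": 1, "used_scholarship": 1, "dollars": 5000.0},
    {"cohort": 2008, "eligible": 1, "used_scholarship": 0, "dollars": 0.0},
    {"cohort": 2008, "eligible": 0, "used_scholarship": 0, "dollars": 0.0},
    {"cohort": 2009, "eligible": 1, "used_scholarship": 1, "dollars": 6000.0},
]
metrics = annual_usage_metrics(records)
```

Reporting these metrics per graduating cohort, rather than only in aggregate, aligns with the phase-in and exposure issues discussed in the implementation-timeline section above.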

Reviewing Example Data Dashboards 

“A dashboard is a visual display of the most important information needed to achieve one or more objectives, consolidated and arranged on a single screen so the information can be monitored at a glance.” (Few 2013) 

Promise programs can use a dashboard approach to provide various audiences with clear and concise information about the status or impact of the program. The Pittsburgh Promise has developed an impact dashboard that presents metrics on fundraising, scholarship totals and amounts, Scholar college graduates, and impact on high school graduation rate, post-secondary education enrollment, and college retention rates. 

Dashboard expert Stephen Few offers these key principles for developing dashboards: 

  • Dashboards are visual displays that usually employ a mix of text and graphics. 

  • They display the information needed to achieve specific objectives; it is therefore important to be clear about those objectives. 

  • They fit onto a single computer screen so that all needed information is available without scrolling or paging. 

  • Dashboard users can monitor the issue from a single glance at the screen. 

  • Dashboards present information using small, concise, direct, and clear display media. 

  • In order to be effective, dashboards should be customized to the function, audience, and content. 
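To make the "single screen, at a glance" principle concrete, the toy sketch below renders a handful of indicators as one compact text panel. The metric names and values are hypothetical, not taken from any real program's dashboard; a production dashboard would of course use graphical display media as Few recommends.

```python
# Toy sketch of a single-screen, glanceable display of Promise metrics.
# All metric names and values below are hypothetical.

def render_dashboard(metrics, width=40):
    """Format name/value pairs as one compact text panel."""
    lines = ["PROMISE IMPACT DASHBOARD".center(width, "=")]
    for name, value in metrics.items():
        # One line per indicator keeps the whole display glanceable.
        lines.append(f"{name:<28}{value:>12}")
    lines.append("=" * width)
    return "\n".join(lines)

panel = render_dashboard({
    "Scholars funded": "1,250",
    "Dollars awarded": "$5.4M",
    "HS graduation rate": "78%",
    "College enrollment rate": "64%",
    "College retention rate": "81%",
})
print(panel)
```

The design choice worth noting is the fixed width and one line per indicator: everything fits in a single view, mirroring Few's requirement that the display be monitored at a glance.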

Planning for When to Measure 

The Theory of Change allows program stakeholders to clearly articulate and understand how the program is supposed to bring about the targeted outcomes, the implementation timeline makes clear which program recipients experienced the entirety of the intended intervention, and the indicators framework offers some possible metrics for ongoing monitoring to gauge progress. As Promise programs embark on measurement, it is critical to understand (and communicate to stakeholders) exactly when various outcomes are likely to be observed. 

For scholarly and policy research on the impact of Promise programs, see our Promise Programs Research Bibliography. 

For a practical summary of Promise program research aimed at policymakers and practitioners, please review The Free College Handbook. 

This page was revised and updated in November 2023.