Now that you have developed a logic model (check out this post if you still need some help, or download my logic model guide), you might be wondering how to integrate your data collection plans with it. If your logic model is clear, using it to build an evaluation plan will be pretty straightforward.
Your logic model might look something like the example below: a grid that connects activities with outputs and outcomes. If you’re not sure what the difference between outputs and outcomes is, you may want to refer to the Kellogg Foundation's excellent and comprehensive guidebook on developing logic models. The logic model below is based on a logic model template developed by the Milwaukee Public Schools Research and Development Department.
A Logic Model Example
Let’s use a hypothetical tutoring program as an example. Our tutoring program is designed to help students in our community who are at risk of not graduating from high school. Our research tells us that failing to test proficient in a core subject is a significant indicator of failing to graduate from high school, so we will identify students who are not testing proficient on at least one subject and provide them with one-on-one tutoring.
I’ve used short-term, medium-term and long-term outcomes here; there are a lot of different ways to think about outcomes and impacts, and this is just one approach. Here is our hypothetical logic model:
Adding the Measures
There are two major questions we can use data to answer. One, are we doing what we said we would do? And two, is it working? Measures that address the first question are performance measures; measures that address the second are outcome measures, or indicators. I have included some examples here; the list is not exhaustive.
Why Two Kinds of Measures
Here, I am calling measures of what we do performance measures and measures of the effect we have outcome measures. Some people call them all performance measures (I often do that). Some people call measures of our activities output measures and measures of the impact we had outcome measures. All of those terminologies are fine, but it is important to look at both kinds. Performance measures (sometimes called process measures) help you understand whether you are implementing the program the way it was designed. Outcome measures tell you whether that program leads to the change you want to create in the world. You’ll want to monitor performance measures frequently and outcome measures less frequently (say, annually or semi-annually for most programs). Monitoring both will help you understand both whether the model you’re implementing is the right one and whether you are implementing it in a high-quality way.
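If you keep your measures in a spreadsheet or database, it can help to tag each one by type and review cadence so performance measures get pulled monthly while outcome measures get pulled annually. Here is a minimal sketch in Python; the measure names and cadences are hypothetical examples for the tutoring program, not a standard list.

```python
# Hypothetical measure catalog for the tutoring program.
# Each measure is tagged with its type (performance vs. outcome)
# and how often it should be reviewed.
measures = [
    {"name": "tutoring sessions delivered per month", "type": "performance", "review": "monthly"},
    {"name": "students enrolled vs. target",          "type": "performance", "review": "monthly"},
    {"name": "students testing proficient",           "type": "outcome",     "review": "annually"},
    {"name": "graduation within 5 years",             "type": "outcome",     "review": "annually"},
]

def measures_of_type(measure_type):
    """Return the names of all measures of a given type,
    so each kind can be monitored on its own schedule."""
    return [m["name"] for m in measures if m["type"] == measure_type]
```

Calling `measures_of_type("performance")` gives you the short list to check at your monthly review, while `measures_of_type("outcome")` gives the list for the annual one.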
You will probably find that a lot of your program's performance measures are spelled out in your grant application. If you articulated how many participants you would serve and what services they would receive, those go into your performance measures.
Your outcome measures are proxies for (things you can measure that reflect) the change you want to see in your participants' lives. They need to be defined clearly enough that you can count them easily. For example, don't say "graduate from high school." What if participants return as adults and graduate? How long will you track them? Use a clearer measure like "graduate from high school within 5 years." That's a reasonable amount of time to track students, and it reflects a positive outcome of our tutoring program. Some of these measures are probably in use elsewhere and may be somewhat standardized. You don't always have to make up your own.
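A countable definition like "graduate from high school within 5 years" can be checked mechanically once it is written down precisely. Here is a rough sketch in Python; the record fields and sample data are hypothetical, and a real implementation would pull from your student information system.

```python
from datetime import date

# Hypothetical participant records: program entry date and graduation
# date (None if the student has not graduated within the tracking window).
participants = [
    {"name": "A", "entered_program": date(2018, 9, 1), "graduated_on": date(2022, 6, 10)},
    {"name": "B", "entered_program": date(2018, 9, 1), "graduated_on": None},
    {"name": "C", "entered_program": date(2017, 9, 1), "graduated_on": date(2023, 6, 12)},
]

def graduated_within_years(record, years=5):
    """True if the participant graduated within `years` of entering the program."""
    grad = record["graduated_on"]
    if grad is None:
        return False
    entered = record["entered_program"]
    cutoff = entered.replace(year=entered.year + years)
    return grad <= cutoff

# The outcome measure: share of participants meeting the definition.
rate = sum(graduated_within_years(p) for p in participants) / len(participants)
```

Because the measure has an explicit time window, students who graduate years later (like "C" above) are simply outside the definition rather than an ambiguous case you have to argue about.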
As you move to the right side of the logic model, the outcomes get harder to measure, for multiple reasons. There are more externalities to consider: graduating from high school, for example, is influenced by a lot of factors that our tutoring program does not address. Long-term outcomes also take a long time to happen, and the data will be harder to get because you’ll have to follow up with participants long after they have left the program or rely on some other data source. It’s OK that these things are hard to measure. You can probably find evidence from other programs that if you’re implementing your program well and getting the right short-term results, the long-term results will follow. This is where a good research base is helpful.