In previous posts, I've mentioned that the further out in time an outcome occurs, the harder it is to collect data on it. For example, if your program provides tutoring services to middle schoolers that are intended to increase their college attendance rates, it will be very challenging -- and take a long time -- to collect your own long-term outcome data.
There are multiple practical and legal obstacles to following participants over time. In this example, administrative records might make it possible to know what happened to every one of your students. However, laws that protect student privacy will often make that approach impossible. Following up with your former students by survey would be expensive, and most programs find it very hard to stay in touch with participants over several years. Finally, the survey approach would produce biased data -- the students who loved your program and believe it changed their lives for the better will be the most likely to complete your survey year after year.
One powerful solution is to use other programs' outcomes to support your logic model. If other programs have demonstrated a causal link between your program's short-term outcomes and your intended long-term outcomes, you can -- and should -- cite that as evidence that your program will lead to the long-term outcomes you're working towards. For example, if research has demonstrated that students who are proficient in math in the 8th grade attend college at higher rates, you can simply cite that documentation to support the importance of helping your middle school students succeed in math and your expectation that increasing their math proficiency now will increase their college attendance five years from now.
This approach can also be used to link your outputs to your outcomes. For example, there is plenty of documentation that tutoring and after-school programs that actively collaborate with students' schools have better outcomes. You would want to emphasize that collaboration in your program design and consistently monitor that output. Ensuring that your program is designed with known best practices in mind, and consistently monitoring your program's performance on these quality measures, makes it much more likely that your intended outcomes will occur.
You can find information like this by looking at published evaluation studies. Many research and evaluation firms publish their evaluation findings and best-practices reports online, and many non-profit and government agencies make their own evaluations available. US government agencies publish a huge variety of research and evaluation studies.
Using prior research and evaluations is important both for program design and for program evaluation. There are some best practices to keep in mind when using this approach. First, be sure that you are using the evidence fairly. It's tempting to look only for studies that support your logic model, but make sure you also seek out materials that do not support your model and apply the lessons from those as well. They may help you avoid pitfalls and design the best possible program. Second, choose studies that reflect as closely as possible the design of your program, the population you serve, and the outcomes you are looking for. You may want to combine several pieces of research to support different elements of your program design.
Finally, use both research and evaluation to support your logic model. Research is usually designed to show larger patterns and test theories, while evaluation focuses on particular programs or interventions. Both are useful in developing your program.