Evaluating an educational program can mean anything from distributing a “smile sheet” (“How much did you like our program?”) to launching a multi-year, mixed-methods study on the effectiveness of your intervention. The former won’t tell you much of value, and the latter is well out of budget for most initiatives. Good program evaluation takes some time and resources, but doesn’t necessarily need to resemble a full-scale research project. Even if your evaluation is a modest one, following some basic rules can help produce data that contribute meaningfully to improving the program.
Rule #1: Evaluation should be part of the original program setup. It’s not uncommon for a program to be developed and then, months later, for somebody to say, “Gee, we ought to evaluate it!” By that time, it’s too late to collect any pre-program data, meaning that measuring participants’ change over the course of the program may be impossible. As you develop your program, think about the program objectives you expect to achieve (see Rule #2), and how you will know at the end of the program whether you have achieved them. Another reason to plan from the start is that you’ll want to include money in the program budget for evaluation – the staff who will perform it, any materials needed, incentives for participants, and so on.
Rule #2: Evaluation should be linked to program goals and objectives. The words goal and objective are often used interchangeably, but in evaluation-speak, goals are broader aims of a program, while objectives are more concrete, measurable hoped-for outcomes. So, for instance, a math-education program might have as its goal to increase student interest and improve performance in math; the more specific objectives might be to increase participants’ confidence in grade-level math, to enhance understanding of particular concepts, to improve scores on final exams, and so forth. For each objective, there should be a measurable question to be addressed in the evaluation (see Rule #3). You may want to use a logic model to help ensure that your evaluation plan aligns well with your program plan.
Rule #3: Measures should be designed to answer questions that align with program objectives. The “smile sheet” approach is fine as a small part of a larger evaluation, but it’s not going to tell you whether you have met all of the main goals of a program. Once you’ve articulated goals and objectives, you can state key evaluation questions. Staying with the math-education example, a key question might be “To what extent did participants’ math confidence levels change over the program period?” When possible and appropriate, make use of existing, validated measures to help answer key evaluation questions.
Rule #4: Feed evaluation results back into the program. Too often, evaluation gets done more to check it off the to-do list than to actually make improvements in the program. Take the time to analyze evaluation data, and consider what the results suggest about where and how the program is, or is not, meeting its objectives and goals. Use the data to support recommendations for program changes where appropriate. And perhaps the most important advice of all: Resist the temptation to see only the data that support initial program expectations and to ignore findings that point to the contrary.
Marina Micari is an associate director at the Searle Center for Advancing Learning and Teaching.