Volume XI, Number 2, Summer 2005
Issue Topic: Evaluation Methodology
Questions & Answers
Gary Henry is a professor in the Andrew Young School of Policy Studies and Department of Political Science at Georgia State University. He has evaluated numerous policies and programs, including the Georgia Pre-K and HOPE Scholarship programs, and has published extensively on evaluation methodology and policy analysis, most recently focusing on the effects of education policies, public information campaigns, and, as discussed here, evaluation influence. Henry has been director of evaluation at the David and Lucile Packard Foundation, deputy secretary of education for the Commonwealth of Virginia, and chief methodologist with the Joint Legislative Audit and Review Commission for the Virginia General Assembly. He received the American Evaluation Association Award for Outstanding Evaluation in 1998 and is a former co-editor-in-chief of New Directions for Evaluation.
Evaluators are plagued by the notion that much of our work goes unused. What is your take on how we should think about evaluation use?
Evaluation use has always been important to our field. Most of us consider it an important criterion in judging the merit of our work. We lack clarity, however, about what it means for our work to be used, and as a result the term "use" has gradually been stripped of meaning.
Some, for example, define use as direct action taken as a result of an evaluation (also known as instrumental use). Several well-known studies have assessed such "end-state" definitions of use by asking decision makers whether they attribute their actions to evaluations they read before acting. This research has fed a pervasive notion in the field that evaluations generally go unused. While this definition is one way to think about use, it is too narrow to serve as the primary way we think about it: it leaves out everything else that evaluation can set in motion.
To take a classic example, over time the 1960s High/Scope evaluation in Ypsilanti, Michigan, has completely changed the way we think about early childhood education. It has become the foundation for the huge movement toward universal preschool that we see now in the United States. We would not acknowledge this kind of use if we just looked at whether Michigan policymakers picked up the evaluation's results in the year or two after they were released. We would miss this evaluation's extremely powerful upstream use.
Treating use as the end goal of evaluation keeps us from seeing the various types of influence evaluation can have. We need a paradigm shift in how we think about and research this topic. That shift requires treating evaluation as an intervention with its own set of processes, outputs, and outcomes that we are aware of and accountable for.
How does thinking about evaluation as an intervention affect how we think about evaluation use?
When thought of as an intervention with its own set of outcomes, it is easier to see that evaluation has the potential to be used in multiple ways. To illustrate this, we can think about mapping the intervention of evaluation like we map other interventions or programs using logic models or theories of change.
First, rather than seeing use as the end goal of evaluation, the end goal should be social betterment. Ultimately, we should be concerned with an evaluation's influence on the beneficiaries of a program or policy, and look at whether people are better off as a result of the evaluation.
Backing up from this end goal, we can identify the various evaluation outcomes that will lead to social betterment. Rather than view these outcomes as types of evaluation use, however, we should think about them as types of influence, or as direct and indirect changes that are triggered by evaluation. By changing the term from use to influence, it becomes easier to think about evaluation as an intervention and to seek the broader ways in which evaluations, or the evaluation process itself, influence social betterment in the long term.
Evaluation outcomes linked to social betterment can be categorized in three ways: individual, interpersonal, and collective. Individual influence occurs when evaluation changes something within an individual, such as one's thoughts, attitudes, beliefs, or actions. Interpersonal influence refers to changes triggered by interactions between individuals, such as when an evaluation's findings are used to persuade others about the merit of a program or policy. Collective influence means changes in the decisions or practices of organizations or systems, such as when policy change happens as a result of an evaluation, or when a program is expanded, continued, or terminated.
To continue the logic model analogy, we can back up even farther and map the various evaluation activities that can lead to these outcomes. For example, how we select stakeholders, design evaluations, collect and analyze data, generate findings, and disseminate results affects the types of influence our evaluations have.
Reflecting on the model as a whole, we can use it to identify the various pathways through which our evaluations can have influence. For example, in my work on universal preschool a desired social betterment outcome is that kids develop more skills faster as a result of quality preschool programs. To get that outcome, collective action is needed in that public policies have to be in place to support such programs. But other forms of influence also may be needed before support for those policies is generated.
For example, individual influence is important because decision makers must first find the issue compelling and be persuaded about the benefits of universal preschool. Next, decision makers (or other groups) need to become change agents and interpersonally persuade others that preschool is in the best interests of children in their state. From there, the issue has to make it on the policy agenda and be considered by the legislature, administration, and voting public before we eventually get policy reform. If we look only at evaluation in terms of its influence on collective action, however, we miss the other steps and types of influence needed to get there.
Related resources:
Henry, G. T. (2000). Why not use? New Directions for Evaluation, 88, 85–98.
Henry, G. T. (2003). Influential evaluations. American Journal of Evaluation, 24(4), 515–524.
Henry, G. T., & Mark, M. M. (2003). Beyond use: Understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24(3), 293–314.
Mark, M. M., & Henry, G. T. (2004). The mechanisms and outcomes of evaluation influence. Evaluation, 10(1), 35–57.
How does thinking about influence change what we do as evaluators?
First, thinking about evaluation influence brings greater clarity to the purposes of our evaluations. It should prompt us to ask how evaluation sponsors expect the evaluation to have influence. Is it to inform policy change, and if so, what kind? Should it inform programmatic change, and if so, which aspects? As we design evaluations, we should ask these questions and develop at least a mental map of the pathways through which the evaluation can have influence. The evaluation process itself will be shaped by the kinds of influence pursued.
Second, we need to think more about communication strategies. It is important to identify the audiences we need to reach to ensure our work has influence, and to be deliberate actors in achieving that influence. For example, perhaps the issue we're evaluating isn't appearing on the public's or policymakers' radar. We have to consider who we need to reach to raise its visibility, and how our work can affect the issue's salience for individuals in that audience. Or we may need to inspire audiences like the media, advocacy groups, or citizens to make interpersonal use of the evaluation findings. These considerations should be included in evaluation planning, and we should give ourselves enough time to actually follow through on our plans once we make them and have the evaluation findings available.
Third, transparency is important. Underlying the concept of influence is the notion that people have to know evaluation findings in order to use them. An evaluation kept under wraps directly conflicts with this notion. For evaluations funded with public or private dollars alike, we need to take a careful look at transparency and encourage broad exposure and influence with many audiences.
If we as evaluators embrace the notion of evaluation as an intervention and hold ourselves accountable for the outcomes of our work, many types of evaluation influence become possible. Ultimately this can change the way we think about our work and help shed the perception that much of it goes unused.
Julia Coffman, Consultant, HFRP