
Mel Mark, professor of psychology at the Pennsylvania State University and president-elect of the American Evaluation Association, discusses why theory is important to evaluation practice.

Hey, this issue of The Evaluation Exchange focuses on methods, including recent methodological developments. What’s this piece on evaluation theory doing here? Was there some kind of mix-up?

No, it’s not a mistake. Although evaluation theory1 serves several purposes, perhaps it functions most importantly as a guide to practice. Learning the latest methodological advance—whether it’s some new statistical adjustment for selection bias or the most recent technique to facilitate stakeholder dialogue—without knowing the relevant theory is a bit like learning what to do without knowing why or when.

What you risk is the equivalent of becoming really skilled at tuning your car’s engine without thinking about whether your transportation needs involve going across town, overseas, or to the top of a skyscraper. Will Shadish, Tom Cook, and Laura Leviton make the same point using a military metaphor: “Evaluation theories are like military strategies and tactics; methods are like military weapons and logistics,” they say. “The good commander needs to know strategy and tactics to deploy weapons properly or to organize logistics in different situations. The good evaluator needs theories for the same reasons in choosing and employing methods.”2

The reasons to learn about evaluation theory go beyond the strategy/tactic or why/how distinction, however. Evaluation theory does more than help us make good judgments about what kind of methods to use, under what circumstances, and toward what forms of evaluation influence.

First, evaluation theories are a way of consolidating lessons learned, that is, of synthesizing prior experience. Carol Weiss’ work can help evaluators develop a more sophisticated and nuanced understanding of the way organizations make decisions and may be influenced by evaluation findings.3 Theories enable us to learn from the experience of others (as the saying goes, we don’t live long enough to learn everything from our own mistakes). George Madaus, Michael Scriven, and Daniel Stufflebeam had this function of evaluation theory in mind when they said that evaluators who are unknowledgeable about theory are “doomed to repeat past mistakes and, equally debilitating, will fail to sustain and build on past successes.”4

Second, comparing evaluation theories is a useful way of identifying and better understanding the key areas of debate within the field. Comparative study of evaluation theory likewise helps crystallize what the unsettled issues are for practice. When we read the classic exchange between Michael Patton and Carol Weiss,5 for example, we learn about very different perspectives on what evaluation use can or should look like.

A third reason for studying evaluation theory is that theory should be an important part of our identities as evaluators, both individually and collectively. If we think of ourselves in terms of our methodological skills, what is it that differentiates us from many other people with equal (or even superior) methodological expertise? Evaluation theory. Evaluation theory, as Will Shadish said in his presidential address to the American Evaluation Association, is “who we are.”6 But people come to evaluation through quite varied pathways, many of which don’t involve explicit training in evaluation. That there are myriad pathways into evaluation is, of course, a source of great strength for the field, bringing diversity of skills, opinions, knowledge sets, and so on.

Despite the positive consequences of the various ways that people enter the field, this diversity also reinforces the importance of studying evaluation theories. Methods are important, but, again, they need to be chosen in the service of some larger end. Theory helps us figure out where an evaluation should be going and why—and, not trivially, what it is to be an evaluator.

Of course, knowing something about evaluation theory doesn’t mean that choices about methods can be made automatically. Indeed, lauded evaluation theorist Ernie House notes that while theorists typically have a high profile, “practitioners lament that [theorists’] ideas are far too theoretical, too impractical. Practitioners have to do the project work tomorrow, not jawbone fruitlessly forever.”7 Especially for newer theoretical work, the translation into practice may not be clear—and sometimes not even feasible. Even evaluation theory that has withstood the test of time doesn’t automatically translate into some cookbook or paint-by-number approach to evaluation practice.

More knowledge about evaluation theory can, especially at first, actually make methods choices harder. Why? Because many evaluation theories take quite different stances about what kind of uses evaluation should focus on, and about how evaluation should be done to achieve those uses. For example, to think about Donald Campbell8 as an evaluation theorist is to highlight (a) the possibility of major choice points in the road, such as decisions about whether or not to implement some new program; (b) the way decisions about such things often depend largely on the program’s potential effects (e.g., does universal pre-K lead to better school readiness and other desirable outcomes?); and (c) the benefits of either randomized experiments or the best-available quasi-experimental data for assessing program effects.

In contrast, when we think about Joseph Wholey9 as an evaluation theorist, we focus on a very different way that evaluation can contribute: through developing performance-measurement systems that program administrators can use to improve their ongoing decision making. These performance measurement systems can help managers identify problem areas and also provide them with good-enough feedback about the apparent consequences of decisions.

Choosing among these and the many other perspectives available in evaluation theories may seem daunting, especially at first. But it’s better to learn to face the choices than to have them made implicitly by some accident of one’s methodological training. In addition, theories themselves can help in the choosing. Some evaluation theories have huge “default options.” These theories may not exactly say “one size fits all,” but they certainly suggest that one size fits darn near all. Indeed, one of the dangers for those starting to learn about evaluation theory is becoming a true believer in one of the first theories they encounter. When this happens, the new disciple may act like his or her preferred theory fits all circumstances. Perhaps the most effective antidote to this problem is to be sure to learn about several evaluation theories that take fairly different stances. Metaphorically, we probably need to be multilingual: No single evaluation theory should be “spoken” in all the varying contexts we will encounter.

However, most, if not all, evaluation theories are contingent; that is, they prescribe (or at least are open to) quite different approaches under different circumstances. As it turns out, there even exist theories that suggest very different bases for contingent decision making. Put differently, there are theories that differ significantly on reasons for deciding to use one evaluation design and not another.

Such theories lead us to think about different “drivers” of contingent decision making. For example, Michael Patton’s well-known Utilization-Focused Evaluation tells us to be contingent based on intended use by intended users. Almost any method may be appropriate, if it is likely to help intended users make the intended use.10 Alternatively, in a recent book, Huey-Tsyh Chen joins others who suggest that the choices made in evaluation should be driven by program stage.11 Evaluation purposes and methods for a new program, according to Chen, would typically be different from those for a mature program. Gary Henry, George Julnes, and I12 have suggested that choices among alternative evaluation purposes and methods should be driven by a kind of analytic assessment of each one’s likely contribution to social betterment.13

It can help to be familiar with any one of these fundamentally contingent evaluation theories. And, as is true of evaluation theories in general, one or another may fit better, depending on the specific context. Nevertheless, the ideal would probably be to be multilingual even in terms of these contingent evaluation theories. For instance, sometimes intended use may be the perfect driver of contingent decision making, but in other cases decision making may be so distributed across multiple parties that it isn’t feasible to identify specific intended users: Even the contingencies are contingent. Evaluation theories are an aid to thoughtful judgment—not a dispensation from it. But as an aid to thoughtful choices about methods, evaluation theories are indispensable.

1 Although meaningful distinctions could perhaps be made, here I am treating evaluation theory as equivalent to evaluation model and to the way the term evaluation approach is sometimes used.
2 Shadish, W. R., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Newbury Park, CA: Sage.
3 For a recent overview, see Weiss, C. H. (2004). Rooting for evaluation: A Cliff Notes version of my work. In M. C. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 12–65). Thousand Oaks, CA: Sage.
4 Madaus, G. F., Scriven, M., & Stufflebeam, D. L. (1983). Evaluation models. Boston: Kluwer-Nijhoff.
5 See the papers by Weiss and Patton in Vol. 9, No. 1 (1988) of Evaluation Practice, reprinted in M. Alkin (Ed.). (1990). Debates on evaluation. Newbury Park, CA: Sage.
6 Shadish, W. (1998). Presidential address: Evaluation theory is who we are. American Journal of Evaluation, 19(1), 1–19.
7 House, E. R. (2003). Evaluation theory. In T. Kellaghan & D. L. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 9–14). Boston: Kluwer Academic.
8 For an overview, see the chapter on Campbell in the book cited in footnote 2.
9 See, e.g., Wholey, J. S. (2003). Improving performance and accountability: Responding to emerging management challenges. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium. Mahwah, NJ: Lawrence Erlbaum.
10 Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage.
11 Chen, H.-T. (2005). Practical program evaluation: Assessing and improving planning, implementation, and effectiveness. Thousand Oaks, CA: Sage.
12 Mark, M. M., Henry, G. T., & Julnes, G. (2000). Evaluation: An integrated framework for understanding, guiding, and improving policies and programs. San Francisco, CA: Jossey-Bass.
13 Each of these three theories also addresses other factors that help drive contingent thinking about evaluation. In addition, at least in certain places, there is more overlap among these models than this brief summary suggests. Nevertheless, they do in general focus on different drivers of contingent decision making, as noted.

Mel Mark, Ph.D.
Professor of Psychology
The Pennsylvania State University
Department of Psychology
407 Moore Hall
University Park, PA 16802
Tel: 814-863-1755
Email: m5m@psu.edu

© 2014 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project