Volume XI, Number 2, Summer 2005
Issue Topic: Evaluation Methodology
Ask the Expert
Robert Boruch, a founder of the Campbell Collaboration and professor of education and statistics at the University of Pennsylvania, discusses how the Campbell Collaboration and randomized trials contribute to evidence-based policy.
The Campbell Collaboration (C2) is a nonprofit organization that aims to help people make well-informed decisions about the effects of interventions in the social, behavioral, and educational arenas. Using systematic reviews of studies of interventions (programs, practices, and policies), C2 helps policymakers, practitioners, researchers, and the public identify what works.
Systematic reviews synthesize the available high-quality evidence on interventions. After a thorough literature search that screens studies for quality, reviewers identify the least equivocal evidence available on an intervention, describe what that evidence says about the intervention's effectiveness, and explore how effectiveness varies with process, implementation, intervention components, participants, and other factors.
A main justification for C2's effort is the sheer volume of studies purporting to show that certain interventions “work.” Multiple studies of the same intervention often exist, many of them based on anecdotal or otherwise fragmentary evidence. Reports from such studies can influence decisions about whether to adopt an intervention. Because interventions frequently do not have the effects they are purported to have, decisions to adopt programs may rest on faulty evidence.
C2 reviewers¹ conduct systematic reviews after completing a lengthy protocol in which they specify the kinds of outcome variables they will examine, as well as the permissible research designs. Virtually all reviews start from the premise that randomized trials take priority: well-designed, well-executed randomized trials guarantee that the group exposed to the intervention (e.g., individuals, schools, classrooms) is similar, on average, to the group not exposed to it or exposed to a different one. When resources allow, C2 reviews also examine high-quality nonrandomized studies, but randomized trials remain the focus.
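The random assignment that underlies this guarantee can be sketched in a few lines of Python. This is only an illustration of the principle, not any tool C2 uses; the function name and the list of schools are hypothetical.

```python
import random

def randomly_assign(units, seed=None):
    """Split a list of units (people, schools, classrooms) into
    treatment and control groups by random assignment, so that the
    two groups are similar on average across all characteristics."""
    rng = random.Random(seed)  # seeding makes the assignment reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Example: six hypothetical schools, half assigned to the intervention
treatment, control = randomly_assign(["A", "B", "C", "D", "E", "F"], seed=1)
```

Because assignment depends only on chance, any difference later observed between the two groups can be attributed to the intervention rather than to pre-existing differences.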
Boruch, R. F. (Ed.). (2005). Place randomized trials: Experimental tests of public policy (The Annals of the American Academy of Political and Social Science, Vol. 599). Thousand Oaks, CA: Sage.
The Campbell Collaboration's online library offers two databases. The C2 Social, Psychological, Education, and Criminological Trials Registry (C2-SPECTR) contains over 11,700 entries on randomized and possibly randomized trials. C2 Reviews of Interventions and Policy Evaluations (C2-RIPE) contains information on systematic reviews, including titles, protocols, abstracts, and refereed critiques. www.campbellcollaboration.org
The Cochrane Collaboration, a sibling organization of the Campbell Collaboration, develops systematic reviews in the health care arena. www.cochrane.org
The What Works Clearinghouse offers searchable databases and reports that provide ongoing, high quality reviews of the effectiveness of replicable educational interventions to improve student outcomes. www.whatworks.ed.gov
An example of a C2 systematic review is one completed on so-called “scared straight” programs, in which kids at risk of committing a crime—and sometimes even those who are not—hear from convicted felons who try to deter them from delinquent acts or crimes. These programs have received a lot of press, are popular with many parents, and seem to enjoy some approval among policymakers. The systematic review uncovered 200–300 articles from studies of these programs; only a small fraction turned out to be fair tests of the program. The approximately eight randomized trials identified in the review showed, contrary to popular belief, that these programs actually increased the likelihood that kids would engage in delinquent behavior or crime.²
In many governments—not only here in the U.S. but elsewhere as well—the push toward more policy based on less equivocal evidence about the effectiveness of interventions or policies is strong. Often, producing such evidence requires a randomized trial. But does evidence-based policy require the use of randomized trials?
In 1981 the Federal Judicial Center, the research arm of the U.S. federal courts, produced a relatively simple way of outlining the conditions under which evaluators should consider a randomized trial.³ The Center was interested in whether it was appropriate to conduct randomized trials in a prison or judicial context, and it identified five criteria for determining whether a randomized trial is appropriate: (a) the issue must be a serious social problem, (b) the solution must be unknown or debatable, (c) no method other than a randomized trial will yield equally defensible evidence, (d) the results should be usable in the public sector, and (e) evaluators must be able to protect the individual rights of those involved in the study. If all five conditions are met, it is sensible to consider a randomized trial; if one or more is not, a randomized trial is probably not appropriate.
Abby R. Weiss, Project Manager, HFRP
¹ Reviewers can come from anywhere in the world and must find their own funding to generate the review.
² Petrosino, A., Turpin-Petrosino, C., & Buehler, J. (2002). “Scared Straight” and other juvenile awareness programs for preventing juvenile delinquency. The Cochrane Database of Systematic Reviews 2002, Issue 2, Art. No.: CD002796.
³ Federal Judicial Center. (1981). Social experimentation and the law. Washington, DC: Author.