Volume XV, Number 1, Spring 2010
Issue Topic: Scaling Impact
Questions and Answers
Marshall “Mike” Smith is senior counselor to the secretary as well as director of international affairs at the U.S. Department of Education. His previous positions in government have included chief of staff to the secretary for education and assistant commissioner for policy studies in the Office of Education under the Carter administration and undersecretary and acting deputy secretary for education in the Clinton administration. Dr. Smith has been a professor at Harvard University, the University of Wisconsin–Madison, and Stanford University, where he was also dean of the School of Education. Most recently, he was program director for education at the William and Flora Hewlett Foundation. His distinguished career includes positions on many national commissions and panels, and he has authored numerous publications. Dr. Smith earned his master’s and doctoral degrees from the Harvard Graduate School of Education.
What does the phrase “going to scale” mean?
In the current education environment, going to scale means taking a promising innovation and replicating it in a large number of places. Going to scale at a significant level means spreading an innovation throughout an entire geographic region. In the policy environment, going to scale means taking an idea that seems to work in a particular setting or in multiple settings, codifying it, and then enforcing it through state or federal legislation.
Why did the idea of scale come into the policy conversation in education?
Policymakers were frustrated that promising innovations and good ideas were not spreading. I remember President Clinton asking, “Why don’t ideas travel?” What he meant was, “Why aren’t good ideas replicated in other settings?” Education reforms in this country are like fireflies in a field—the fireflies blink on and off, but they are isolated and uncoordinated, so they do not give off a concentrated or meaningful glow. Also, when things are tried and they fail, we have a tendency to try the same thing over again later. Or we try reforms that work for a while, but we do not support them long term and they go away.
In response to this frustration, there has been a growing sense that we should stop starting from scratch in education reform and build more on what has worked. This is where the idea of scale comes in. If we can learn how to bring good ideas to scale, then we can start to make more progress in reforming education.
How do you know when an intervention is ready to go to scale?
For an intervention to go to scale, it must have external validity—it must have similar effects in a variety of contexts. There are several considerations in determining external validity.
First, the intervention needs a strong theory of change or logic model that identifies the causal drivers necessary to produce the intervention’s outcomes. As part of this model, it is important to identify the contextual variables that can impact, positively or negatively, the success of those causal drivers. For example, an intervention that involves a reduction in school class size will probably not improve student achievement without additional high-quality teachers to support that reduction. A good understanding of the intervention’s causal drivers and the environmental conditions required for them to be successful leads to a good understanding of the contexts in which that intervention can and should be applied.
Second, the intervention needs to demonstrate large effects. If the intervention takes place in an “ideal” context, and the effects are relatively small, when the intervention is moved to another location where conditions are not optimal, the effects will be overcome by context. The effects need to be large enough to prevail in spite of relatively minor contextual conditions. To show large effects, interventions typically must introduce substantial changes in the status quo. For example, studies of school-choice interventions in which students have options to attend different schools (e.g., through vouchers, charters, or magnet schools) often show little improvement in student achievement. This small impact is likely due, at least in part, to the fact that students’ new schools are not different enough from their old schools on important educational dimensions like curricula, teacher qualifications and backgrounds, and school hours.
Third, flexibility must be built into the intervention. Interventions do not operate uniformly across different sites. To effectively go to scale, the intervention must allow for some flexibility and adaptation. The challenge is determining how much adaptation to allow before the intervention is too different from the original model to be confident that it will still show effects. Effective interventions typically balance structure and flexibility. For example, instructional reforms in the Long Beach school district in California are supported by continuous improvement feedback loops that provide information about how and when to adapt the reforms to meet local needs. Similarly, the Knowledge Is Power Program (KIPP) Academies, which provide charter middle schools, use a set of standard operating principles known as the Five Pillars, but let individual principals decide how to implement them in their schools.
Once you have determined an intervention is ready to go to scale, what challenges are involved in bringing it to new settings?
Interventions encounter several challenges when they go to scale. For example, an intervention will face greater resistance in settings where it is seen as “disruptive” to the existing system or when it is replacing an existing intervention. In such cases, more effort may be needed to convince those involved that the new intervention is worth adopting. When done well, these disruptive interventions can cause powerful changes that lead to sizable improvements.
Related to this, the intervention’s salience, duration, and intensity are especially important when replacing an existing intervention. To make the case for the new intervention, it needs to be seen as a significant improvement over the old approach. Therefore, the new intervention needs to show large and stable effects, which generally requires that it be conducted over a long period of time.
Also, bringing in a new intervention often requires relearning effective practices developed under the old intervention. For example, teachers become more comfortable with and better at teaching a curriculum as time goes on. Without even thinking about it, they do continuous improvement—they build on what they did and learned the previous year. With a new curriculum, they have to relearn the whole system and develop a new set of habits.
Finally, as I said earlier, implementation may look and behave quite differently from one place to another. If the model is sufficiently flexible, this variation should not be problematic. However, these contextual differences may mean that it takes more time and effort to see expected results.
Related Resource
Smith, M. S., & Smith, M. L. (2009). Research in the policy process. In G. Sykes, B. Schneider, & D. N. Plank (Eds.), Handbook of education policy research (pp. 372–397). New York: Routledge (for the American Educational Research Association).
This article explores the relationship between education research and policy. In response to the impression that many education policies and interventions have little impact on education outcomes, the authors examine ways to improve the quality and usefulness of education research. The article concludes with recommendations for policymakers and policy researchers, as well as suggestions for innovations to help produce major improvements in student achievement outcomes.
What evaluation strategies should accompany the process of going to scale?
First, evaluations of the original intervention are needed to ensure that it has internal validity—that the intervention’s causal drivers are working as intended. This process involves collecting implementation data to determine whether the program is implemented as planned, as well as outcome data to ensure that the program achieves what it set out to achieve. With that evidence in hand, you know that if the intervention does not work in a new setting, the failure is likely due to contextual differences that affect implementation rather than to a flaw in the model itself.
Second, once the original site has been evaluated for internal validity and it goes to scale in new locations, evaluations are needed in the new sites to help determine what tweaks are necessary to account for differences in context. As I mentioned earlier, evaluations at this stage should emphasize ongoing learning and continuous improvement to ensure the scaling process is as successful as it can be.
Heather B. Weiss, Ed.D.
Director, Harvard Family Research Project
Helen Janc Malone
Graduate Research Assistant, Harvard Family Research Project