Four experts in the out-of-school time field discuss their experiences using evaluation for program improvement.
Christopher Wimer from HFRP describes three promising methodological approaches to studying program quality in the OST arena.
An introduction to the issue on Evaluating Out-of-School Time Program Quality by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
Claudia Weisburd and Rhe McLaughlin of Foundations, Inc., describe their Quality Assurance System for program improvement.
Sandra Simpkins Chaput from HFRP summarizes recent developmental research examining dimensions of participation in out-of-school activities.
This Snapshot outlines the academic, youth development, and prevention performance measures currently being used by out-of-school time programs to assess their progress, and the corresponding data sources for these measures.
Free. Available online only.
Six experts share their thoughts on how the evaluation field has changed in the past decade and consider what may be in store for the future.
Michael Scriven, author of Evaluation Thesaurus, talks about how evaluation has evolved into a discipline distinct from social science research.
David Chavis outlines the "best of the worst" evaluator practices that impede building good relationships with evaluation consumers.
Molly Engle and James Altschuld reveal some recent trends in university-based evaluation training.
J. Curtis Jones from the Partnership for Whole School Change in Boston describes a performing arts intervention that integrates program concepts into its evaluation.
An introduction to the issue on Reflecting on the Past and Future of Evaluation by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
Craig Russon of the W. K. Kellogg Foundation describes efforts to connect evaluation organizations around the world to form an international community.
This tenth-anniversary issue of The Evaluation Exchange features reflections on some of the trends (both good and bad) that have occurred in the evaluation field over the past decade. Authors consider the “best of the worst” evaluator practices, changes in university-based evaluation training, and the development of evaluation as a discipline. In recognition of the need to look ahead, other articles introduce themes we will address in greater depth in the future, such as international evaluation, technology, evaluation of the arts, and diversity.
Charles McClintock, Dean of the Fielding Graduate Institute's School of Human and Organization Development, shows how narrative methods can aid program evaluation and organization development.
Ricardo Millett from the Woods Fund of Chicago discusses how evaluators can build capacity by addressing issues of diversity and multiculturalism.
Geneva Haertel and Barbara Means of SRI International suggest ways evaluators and policymakers can work together to produce “usable knowledge” of technology’s effects on learning.
Tezeta Tulloch from Harvard Family Research Project reviews Robert Brinkerhoff's The Success Case Method: Find Out Quickly What's Working and What's Not.
A list of organizations and initiatives related to the issue's theme of Reflecting on the Past and Future of Evaluation.
Josh Kirschenbaum and Victor Rubin from PolicyLink discuss the uses of community mapping.
Many lessons have been learned during the past decade of community building; this issue of The Evaluation Exchange explores those lessons and their implications. Articles by experienced and insightful authors discuss a number of critical issues now surfacing in this field, including innovations in community-building evaluation, the role of cultural competency in community-based research and evaluation, and how evaluators and funders can better build on the evaluation and learning approaches that community-based organizations already use to improve their work.
Beth Weitzman and Diana Silver from New York University’s Center for Health and Public Service Research share their experience integrating a comparison group design into a theory of change approach.
Christopher Wimer reflects on the role youth can play in evaluation.
Deborah Johnson illustrates how storytelling can help uncover powerful impacts, drawing on two case studies from the Boys and Girls Club.
An introduction to the issue on Evaluating Community-Based Initiatives by HFRP's Founder & Director, Heather B. Weiss, Ed.D.