Volume XI, Number 2, Summer 2005
Issue Topic: Evaluation Methodology
This issue of The Evaluation Exchange focuses on evaluation methodology, covering topics in contemporary evaluation thinking, techniques, and tools. Mel Mark, president-elect of the American Evaluation Association, kicks off the issue with a discussion of the role that evaluation theory plays in our methodological choices. Other voices in the issue include Georgia State University evaluator Gary Henry, who makes the case for a paradigm shift in how we think about evaluation use and influence, and Robert Boruch, a Campbell Collaboration founder, who discusses the role of randomized trials in defining “what works.” Other contributors respond to various “how to” questions, such as how to foster strategic learning, how to find tools that assess nonprofit organizational capacity, how to select and use various outcome models, how to increase the number of evaluators of color, how to enhance multicultural competency in evaluation, and how to measure what we value so others value what we measure. Finally, the issue explores theory of change, cluster evaluation, and retrospective pretests—methodological approaches currently generating much interest and dialogue.
An introduction to the issue on Evaluation Methodology by HFRP's Founder & Director, Heather B. Weiss, Ed.D.

Theory & Practice
Mel Mark, professor of psychology at the Pennsylvania State University and president-elect of the American Evaluation Association, discusses why theory is important to evaluation practice.

Promising Practices
Robert Penna and William Phillips from the Rensselaerville Institute’s Center for Outcomes describe eight models for applying outcome-based thinking.
John Bare of the Arthur M. Blank Family Foundation explains how nonprofits can learn about setting evaluation priorities based on storytelling and “sacred bundles.”
Abby Weiss from HFRP describes the tool that the Marguerite Casey Foundation offers its nonprofit grantees to help them assess their organizational capacity.

Ask the Expert
John A. Healy, Director of Strategic Learning and Evaluation at The Atlantic Philanthropies, shares ways to position learning as an organizational priority.
Robert Boruch, a founder of the Campbell Collaboration and professor of education and statistics at the University of Pennsylvania, discusses how the Campbell Collaboration and randomized trials contribute to evidence-based policy.
Andrea Anderson, a research associate at the Aspen Institute Roundtable on Community Change, focuses on work related to planning and evaluating community initiatives.

Questions & Answers
Gary Henry makes the case for a paradigm shift in how we think about evaluation use and influence.

Evaluations to Watch
Patricia Rogers of the Royal Melbourne Institute of Technology describes how a theory of change can provide coherence in evaluating national initiatives that are both complicated and complex.
The John S. and James L. Knight Foundation and Wellsys Corporation describe how they plan to aggregate lessons learned across a "thematic cluster" of youth development investments.

Beyond Basic Training
Teresa Boyd Cowles of the Connecticut Department of Education offers self-reflective strategies evaluators can use to enhance their multicultural competency.
Mehmet Öztürk discusses findings from a review of evaluations of programs at selective colleges and universities aimed at improving undergraduate academic outcomes for underrepresented minority and disadvantaged students.
Rodney Hopson and Prisca Collins of Duquesne University describe a new graduate internship program designed to develop leaders in the evaluation field and improve evaluators' capacity to work responsively in diverse racial and ethnic communities.
Theodore Lamb, of the Center for Research and Evaluation at Biological Sciences Curriculum Study, discusses retrospective pretests and their strengths and weaknesses.
The New & Noteworthy section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Evaluation Methodology.
This issue of The Evaluation Exchange was published by Harvard Family Research Project. The managing editor for the issue is Julia Coffman, consultant, and the contributing editor is Erin Harris, research analyst. It was produced by Stacey Miller, publications/communications manager, and Tezeta Tulloch, publications editor. All rights reserved. This periodical may not be reproduced whole or in part without written permission from the publisher. To request reprint permission, email firstname.lastname@example.org.
Harvard Family Research Project gratefully acknowledges the support of the Annie E. Casey Foundation, the Marguerite Casey Foundation, the Ewing Marion Kauffman Foundation, the W. K. Kellogg Foundation, the C. S. Mott Foundation, and the John S. and James L. Knight Foundation. The contents of this publication are solely the responsibility of Harvard Family Research Project and do not necessarily reflect the view of our funders.
Free. 20 Pages. [EEXI-2].