Sally Leiderman, President of the Center for Assessment and Policy Development, explains how evaluation can be a tool to help communities and their partners advance racial equity work.
An introduction to the issue on Democratic Evaluation by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
The New & Noteworthy section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Democratic Evaluation.
Katherine Ryan, Associate Professor of Educational Psychology at the University of Illinois, describes three approaches to democratic evaluation and argues that they can provide field-tested methods for addressing equity and inclusion issues in evaluations of programs for children, youth, and families.
This web only version of the New & Noteworthy section features an expanded annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Democratic Evaluation.
Kathleen McCartney and Heather Weiss of the Harvard Graduate School of Education describe the conditions for evaluations to maintain scientific integrity and serve the public good despite a politicized environment.
Tim Ross, Research Director at the Vera Institute of Justice, explains Vera's rigorous and multitiered data collection process and the benefits of partnerships with public programs.
Dennis Arroyo describes the performance-monitoring mechanisms that nongovernment agencies use to make public officials accountable to citizens.
Ernest House, Emeritus Professor at the University of Colorado, argues that democratic evaluation calls for more ingenuity than other forms of evaluation and that as a result its methods can take many forms.
Anju Malhotra and Sanyukta Mathur from the International Center for Research on Women describe a study in Nepal that compared participatory and more traditional approaches to evaluating adolescent reproductive health interventions.
Kristine Lewis shares Research for Action's experience with training youth to use social science research methods in their campaigns to improve their local high schools.
Jennifer Greene of the University of Illinois talks about her efforts to advance the theory and practice of alternative forms of evaluation, including qualitative, participatory, and mixed-method evaluation.
Cheryl MacNeil, an evaluation consultant, describes the asymmetries of power in evaluation and her efforts to make her evaluation practice more democratic.
The John S. and James L. Knight Foundation and Wellsys Corporation describe how they plan to aggregate lessons learned across a "thematic cluster" of youth development investments.
Teresa Boyd Cowles of the Connecticut Department of Education offers self-reflective strategies evaluators can use to enhance their multicultural competency.
Mehmet Öztürk discusses findings from a review of evaluations of programs at selective colleges and universities designed to improve undergraduate academic outcomes for underrepresented minority and disadvantaged students.
Rodney Hopson and Prisca Collins of Duquesne University describe a new graduate internship program designed to develop leaders in the evaluation field and improve evaluators' capacity to work responsively in diverse racial and ethnic communities.
Theodore Lamb, of the Center for Research and Evaluation at Biological Sciences Curriculum Study, discusses retrospective pretests and their strengths and weaknesses.
The New & Noteworthy section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Evaluation Methodology.
An introduction to the issue on Evaluation Methodology by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
Mel Mark, professor of psychology at the Pennsylvania State University and president-elect of the American Evaluation Association, discusses why theory is important to evaluation practice.
Robert Penna and William Phillips from the Rensselaerville Institute’s Center for Outcomes describe eight models for applying outcome-based thinking.
John Bare of the Arthur M. Blank Family Foundation explains how nonprofits can learn about setting evaluation priorities based on storytelling and “sacred bundles.”
Abby Weiss from HFRP describes the tool that the Marguerite Casey Foundation offers its nonprofit grantees to help them assess their organizational capacity.
John A. Healy, Director of Strategic Learning and Evaluation at The Atlantic Philanthropies, shares ways to position learning as an organizational priority.