An introduction to the issue on Democratic Evaluation by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
Tim Ross, Research Director at the Vera Institute of Justice, explains Vera's rigorous and multitiered data collection process and the benefits of partnerships with public programs.
Kathleen McCartney and Heather Weiss of the Harvard Graduate School of Education describe the conditions for evaluations to maintain scientific integrity and serve the public good despite a politicized environment.
Dennis Arroyo describes the performance-monitoring mechanisms that nongovernment agencies use to hold public officials accountable to citizens.
Arnold Love and Betty Muggah describe how Hamilton Community Foundation applied democratic evaluation principles to transform challenged neighborhoods into vibrant communities.
Ernest House, Emeritus Professor at the University of Colorado, argues that democratic evaluation calls for more ingenuity than other forms of evaluation and that as a result its methods can take many forms.
The New & Noteworthy section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Democratic Evaluation.
Anju Malhotra and Sanyukta Mathur from the International Center for Research on Women describe a study in Nepal that compared participatory and more traditional approaches to evaluating adolescent reproductive health interventions.
This web-only version of the New & Noteworthy section features an expanded annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Democratic Evaluation.
Kristine Lewis shares Research for Action's experience with training youth to use social science research methods in their campaigns to improve their local high schools.
Cheryl MacNeil, an evaluation consultant, describes the asymmetries of power in evaluation and her efforts to make her evaluation practice more democratic.
Andrew Nachison, director of the Media Center, an organization that studies the intersection of media, technology, and society, writes about social capital and democratic processes in a digital society.
This issue of The Evaluation Exchange periodical focuses on democratic evaluation. At the forefront of the discussion are equity and inclusion in the evaluation of programs for children, families, and communities, as well as evaluation to promote public accountability and transparency. Katherine Ryan leads off the issue by presenting major theoretical approaches to democratic evaluation. Several contributors examine these different strands, highlighting the importance of power sharing. Jennifer Greene emphasizes the importance of broad inclusion of stakeholder perspectives in evaluations, while Saville Kushner offers guidelines for people and communities to help evaluation reposition itself as a collaborative effort and thereby begin to address the crisis in public trust between the professional bureaucracy and citizens. Kathleen McCartney and Heather Weiss focus on public accountability, especially the conduct of flagship evaluations to maintain their scientific integrity while also serving the public good. Several contributors provide practical methods and tools to promote democratic evaluation, including the facilitation of dialogue, the training of youth researchers, the use of photovoice and cell phone technology, and access to interactive information through the Internet.
Seema Shah, a researcher at the Institute for Education and Social Policy, shares her experience of engaging community organizing groups to develop a logic model on how community organizing leads to better student outcomes.
Katrina Bledsoe of the College of New Jersey writes about the inclusion of student voices in the evaluation of an obesity prevention program.
Saville Kushner of the Centre for Research in Education and Democracy at the University of the West of England suggests ways that an evaluation's participants can make evaluations more democratic.
This two-day meeting brought together the perspectives of diverse stakeholders to inspire new ideas and foster stronger links between research, practice, and policy. Participants discussed issues of access, quality, professional development, the role of evaluation research, and systems-building efforts.
The John S. and James L. Knight Foundation and Wellsys Corporation describe how they plan to aggregate lessons learned across a "thematic cluster" of youth development investments.
An introduction to the issue on Evaluation Methodology by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
Teresa Boyd Cowles of the Connecticut Department of Education offers self-reflective strategies evaluators can use to enhance their multicultural competency.
Andrea Anderson is a research associate at the Aspen Institute Roundtable on Community Change, where she focuses on work related to planning and evaluating community initiatives.
Mehmet Öztürk discusses findings from a review of evaluations of programs at selective colleges and universities to be used for improving undergraduate academic outcomes for underrepresented minority or disadvantaged students.
Gary Henry makes the case for a paradigm shift in how we think about evaluation use and influence.
Rodney Hopson and Prisca Collins of Duquesne University describe a new graduate internship program designed to develop leaders in the evaluation field and improve evaluators' capacity to work responsively in diverse racial and ethnic communities.
Patricia Rogers of the Royal Melbourne Institute of Technology describes how a theory of change can provide coherence in evaluating national initiatives that are both complicated and complex.