Maria Elena Figueroa from the Johns Hopkins University Center for Communication Programs reveals the Center’s methods for evaluating communication campaigns and offers five examples of their evaluations in progress.
Allison H. Fine is a senior fellow at Demos, a network of action and ideas based in New York City. She writes and speaks on increasing civic participation by harnessing the power of digital technology. In 2006, she published her latest book, Momentum: Igniting Social Change in the Connected Age.
The evaluation of the Center for Tobacco-Free Kids gathered data from the wide range of audiences that the advocacy organization targets in its efforts to influence public policy.
Ian Fordham, Pam Boyd, and Tony Apicella of ContinYou, a leading youth development organization in the United Kingdom, describe their efforts to improve quality in OST programming nationwide.
Susan Frankel of RMC Research Corporation outlines the evaluation of Connecticut's school-based family resource model.
Lucy Friedman describes how a collaborative after school initiative links with universities and families to promote college and career preparation among middle school youth.
Peter Frumkin of the University of Texas at Austin describes the five primary ways in which funders define scale as it relates to nonprofits’ efforts to create a lasting and significant impact, and warns that strategic giving requires a nuanced stance grounded in a clear understanding of the many meanings—and limits—of scale.
Susan Fuhrman, Dean of the Graduate School of Education at the University of Pennsylvania, answers three questions on the challenges—political, technical, and financial—states face in developing accountability systems.
Thomas Gais of the Rockefeller Institute of Government discusses the use of information technology in welfare reform.
Authors from the Institute for Health Policy Studies at the University of California, San Francisco describe how they used both macro-level and individual grantee logic models to drive the evaluation design of the Clinic Consortia Policy and Advocacy Program.
Benoît Gauthier talks about the ways electronic collaboration tools are facilitating evaluation around the world.
Barbara Gebhard of Build describes the initiative's interactive evaluation approach.
Paul Gertler, Harry Patrinos, and Marta Rubio-Codina summarize a study on the outcomes associated with a school-based management intervention in Mexico.
Ken Giunta and Todd Shelton of InterAction answer HFRP's questions about their approaches and ideas on evaluating advocacy.
Two evaluators from SRI describe the benefits realized by the Parent Institute for Quality Education when they prefaced their summative evaluation with a formative evaluation.
Based on their research with community-organizing groups, Eva Gold and Elaine Simon from Research for Action and Chris Brown from the Cross City Campaign for Urban School Reform describe four strategies for building public accountability for education.
Steven Goodman, director of the Educational Video Center and author of Teaching Youth Media, describes a program that teaches media literacy and documentary production skills to youth in New York City, with an eye toward fostering civic engagement.
Jennifer Greene, Associate Professor at Cornell University, discusses a framework for planning and implementing mixed-method evaluations.
Luis Carlos Greer and Tamara Martinez, youth living in Arizona, describe how they got involved with a local community organization to create change in their community.
Geneva Haertel and Barbara Means of SRI International suggest ways evaluators and policymakers can work together to produce “usable knowledge” of technology’s effects on learning.
Joe Hall, President of Banana Kelly International, and Marianne Cocchini, Founder of AER/MAC Consulting, write about evaluation as a learning enterprise for a CBI.
Erin Harris of HFRP outlines what information the HFRP Out-of-School Time Evaluation Database includes, how it is organized, and its practical applications.
Philip Harris and Lori Grubstein of the Crime and Justice Research Center describe the “bottom-up” development of ProDES, an outcome-based information system that tracks youth in the juvenile justice system.
JuNelle Harris of HFRP outlines the basics of designing logic models.
Andrea Anderson is a research associate at the Aspen Institute Roundtable on Community Change, where she focuses on work related to planning and evaluating community initiatives.