Volume XIII, Number 1&2, Spring 2007
Issue Topic: Advocacy and Policy Change
Beyond Basic Training
Marcia Egbert and Susan Hoechstetter offer nine principles to guide advocacy evaluation, based on a recent and groundbreaking Alliance for Justice tool on this topic.
When Alliance for Justice and Rosenberg Foundation began a project to equip funders with a practical way to evaluate advocacy back in 2002, little relevant research or methodology was available. Consequently, Alliance for Justice and The George Gund Foundation partnered to develop new tools that would be practical, flexible, and equally easy for grantmakers and grantseekers to use. The resulting 2005 publication, Build Your Advocacy Grantmaking: Advocacy Capacity Assessment & Evaluation Tools, became the first guide of its kind for nonprofit advocacy. The two new tools featured in the guide will be available online for the first time in April 2007.
In the time since the guide's publication, the field of evaluating advocacy has truly taken off. Multiple evaluation models are now available, and new work is continually emerging. As the field grows, it is important to remember the principles of simplicity, flexibility, and grantee participation. We offer the following nine principles to guide evaluators and advocates in advocacy evaluation.
1. Keep it simple. A simple evaluation framework—even a checklist with a bit of narrative—based on advocacy experience is much more manageable for most nonprofits than complex evaluation requirements that unduly tax already sparse resources, particularly staff time.
2. Value capacity building as a key outcome measure. Very often, the most visible progress that results from advocacy work is the capacity built by a nonprofit. This capacity could include new coalitions formed, relationships gained with public policy decision makers, and skills developed to navigate complex legislative, judicial, executive branch, and election-related processes.
3. Flexibility is a strength, and “failure” to reach a big goal may actually produce important incremental gains. Perhaps the state's budget went into the red following a recession. Obtaining a desired increase in appropriations for child care programs may no longer be feasible that year, but gaining enforcement of existing licensing requirements for higher quality of services might. The nonprofit that can change strategies when the external environment shifts is a stronger advocate. Achieving expected or unexpected benchmarks is important, given the long-term nature of much advocacy work.
4. Let the story be told. Understanding how and why the work unfolded as it did is central to gauging the success of advocacy activity. Telling the story provides a narrative to complement benchmarks by explaining the outside factors that caused the work to take the direction it did.
5. Be clear about evaluation expectations from the beginning of the grant review process. Grantseekers and grantmakers should mutually agree up front about what constitutes effective work and how much leeway grantees have to make choices that vary with the circumstances of their proposed work. The Capacity Assessment Tool can help clarify these expectations.
6. The sum is greater than the parts. Accepting this premise helps alleviate concerns about isolating a particular organization's precise contribution to an overall advocacy outcome. For example, unless an organization is the only one working on a particular policy issue, it may never be certain which organization's actions were the defining reason for a related policy outcome. Yet a funder can identify specific ways in which a grantee's actions spurred or contributed to policymaking. For those who care about policy change, knowing that they or their grantee effectively influenced the outcome should be enough.
7. Measure influence in creative ways. Nontraditional evaluation methods can help meet the challenge of measuring influence. For example, staff members at The California Wellness Foundation deemed one grantee's public education campaign successful when they heard the California Attorney General reframe the issue in the same terms used by the campaign. Other funders have sought the opinions of community members and legislators regarding how effective their grantees' efforts have been in influencing them. Typical indicators of influence might include an invitation for a nonprofit organization to testify at a legislative hearing or newly won support from a state agency official for changes in a regulation.
8. Evaluation requires time and/or money. Nonprofit advocates often have the best information available to evaluate their work, but when outside evaluators are needed, money must be allocated for them.
9. Understand foundations' potential nonmonetary contribution to advocacy activities. While some nonprofits will say they could have used more flexible or longer term funding, grantees may also seek funders' nonfinancial assistance in their advocacy efforts. For example, MAZON: A Jewish Response to Hunger contracted a consultant to evaluate its California Nutrition Initiative advocacy project. One question in the grantee survey asked in what other ways the funder could have helped the advocacy effort. MAZON learned that grantees most wanted introductions to public policy leaders.
As more funders tiptoe, walk, run, or gallop headlong into the world of funding public policy and advocacy, we hope these simple principles help alleviate a common worry that such work is impossible to measure.
Marcia Egbert
Senior Program Officer
The George Gund Foundation
45 Prospect Avenue, West, Suite 1845
Cleveland, OH 44115
Susan Hoechstetter
Foundation Advocacy Director
Alliance for Justice, 11 Dupont Circle, NW
Washington, DC 20036