Volume IX, Number 3, Fall 2003
Issue Topic: Evaluating Community-Based Initiatives
Ask the Expert
Xavier de Souza Briggs is Associate Professor of Public Policy at Harvard University’s Kennedy School of Government and the Martin Luther King, Jr., Visiting Fellow in Urban Studies and Planning at MIT. He spent two years as the Deputy Assistant Secretary for Policy Development and Research at the U.S. Department of Housing and Urban Development during the Clinton Administration and is a member of the Aspen Institute’s Roundtable on Comprehensive Community Initiatives for Children and Families. He is the founder of the Art and Science of Community Problem-Solving Project at Harvard University (www.community-problem-solving.net), a new learning resource for people and institutions worldwide.
How can evaluation improve community building?
Evaluation can meet some critical knowledge needs—but there are limits. First and foremost, those who design, support, and carry out initiatives identified as “community building” perennially need help reflecting on what they really want to accomplish. Community building runs the risk of trying to be all things to all people. The phrase is so elastic that people tend to have vastly different assumptions and philosophies when they approach these efforts.
Community building in the public context is used to describe civic action to promote collective or community well-being. But it is also being used to describe business efforts to create more value for customers by leveraging social connections of various kinds, such as shared tastes in books or home improvement. The online environment is well suited to building community in that way; such thinking underlies the development of successful websites that attract people and motivate them to return frequently.
I offer this example to illustrate how the term “community building” has become appropriable by such varied fields, each with its own values and agendas, each tempted to own the phrase. What’s more, specific agendas and priorities vary within the civic category itself. To me, community building in America has always reflected two main agendas. One is changing political relationships and political power. The other is changing social outcomes. The two sometimes reinforce one another, but sometimes they compete. Evaluation is much better suited to outlining and testing claims about the second agenda.
What lessons does this suggest for evaluation or evaluators?
When there are different or competing rationales and objectives in community-building work, evaluation can examine those rationales and specify the common threads among them. Community development needs people who can build confidence, but it also needs people who can think critically and counter the pressure to focus only on confidence building. Both are crucial, of course, particularly where people mistrust collective work or feel too busy to get involved. But at the extremes, you have the problem of boosterism, wherein those who most need to think critically about their work proceed from a set of strong but mostly unexamined assumptions. Boosters “spin” themselves on the value and promise of their work, too often with disappointing results.
Isn’t this where theories of change or logic models come into play, outlining expectations about causes and effects?
Exactly. One of the strengths of the theory of change approach is that it can help formalize parts of that process and give people conceptual footholds that are critical to a common, evidence-backed understanding of their community-building efforts.
The jargon associated with such approaches can still be off-putting. We have yet to fully translate it for use in community settings, though accessible theory of change work products by the Aspen Institute, Kellogg Foundation, and the Bridgespan Group, among others, really help.
With any new social technology, or set of ideas and ways of implementing them, we need at least as many technologists—people who are comfortable with the new ideas, recognize their limits, bring key ideas into common use, and demystify it all—as we do manuals and formal justifications. I think the process of translation and diffusion will come in time, and evaluators can play a role in it.
Finally, outlining a theory of change is one thing; being able to align one’s operational systems to implement it is quite another. Where implementation must be coordinated across organizations or across parts of an organization, things only get more challenging.
Does evaluation have a role beyond revealing and testing assumptions about cause and effect?
Absolutely. There is an ongoing need to be clearer about who plays what role in a community-building effort. What are the unique capacities that each party involved in the initiative brings? What are their limits and learning needs? Evaluators can clarify the question of role and the coordination of roles. They can help examine the capacity of players to contribute to an initiative. Here though, the lines blur between most traditional program evaluation and the kinds of management assessments that consultants practice—real-time, improvement-focused data gathering and analysis. The earliest commentaries on community building, those by the Chapin Hall Center for example, discussed those distinct evaluator roles—helping improve practice versus rating effectiveness on behalf of the funders or regulators.
Shouldn’t evaluation focus on the objectives of core stakeholders?
Sure, but again, the implementing stakeholders, some of them potential beneficiaries or community clients, may hold a variety of assumptions that need to be clarified as well as tested. Funders as well as regulators—if we include in the mix government’s important function of protecting against waste, fraud, abuse of rights, etc.—are stakeholders too. Community building on the civic side emerged from the realization that grassroots stakeholders can bring important knowledge and capacity to the solution of social problems. More specifically, community building also emerged as a response to top-down, technocratic public policy, with its love of professional credentials, standardization, and government-defined routines and rules. The idea that funders and regulators have no right to make demands of these initiatives, however, is a recipe for parochialism, spotty performance, and even corruption.
You wrote in The Will and the Way¹ that we need better ways to engage both the grassroots and the “grasstops” in the aims and means of community building.
Yes. The changes we want to create do depend on mobilizing at the grassroots level, because it’s the smart thing to do, outcome-wise, and the politically just thing to do. Recall those two agendas. But community-change work, particularly if we want to see scale and sustained impact, also requires mobilizing the grasstops—the influentials, those with the formal authority and money and other resources—in a local community.
We ought to frame the process of community change as targeted to areas of deep need, where appropriate, but at the same time be fairly universalist in our values and offer the opportunity for everyone to get involved. We should appeal, where possible, to the enlightened self-interest of employers, hospitals, universities, and other anchor institutions. Community problem solving is more and more about working out collective action and leveraging capacity across the public, nonprofit, and private sectors.
So evaluators can’t afford to think about community change work simply in terms of the neighborhood-level activities and impacts?
How can evaluation add value when so many agendas and levels are in play?
Community building can benefit from the learning and accountability purposes of evaluation. Evaluation can help the “doers” learn and hone their strategies, either through peer learning or by creating what has been called “a community of practice.” A recent book, Cultivating Communities of Practice,² discusses this concept, which grew out of the work done on knowledge management and social networks in the business world. The concept originated from a competitive need to be at the forefront of innovation. Communities of practice promote replacing a rigid hierarchy, in which information is transmitted on a “need-to-know” basis, with a flatter, more fluid learning and knowledge network.
Communities of practice differ importantly from teams that have a specific task to fulfill or an operational partnership across organizations. A community of practice may interface with a host of project teams, and it may lead to partnerships and alliances, but the community’s identity is defined more by knowing and learning than by doing in the deadline-driven sense. Evaluators can offer a community of practice dimension to community-based initiatives, serving as knowledge sources in larger networks so that information flows to help improve practice.
But evaluation’s second major role, that of external accountability, is becoming ever more important as well. The question “did we get a return on investment?” still turns some people off. However, it reflects the fact that those who invest resources confront demands for resources that outstrip supply and, as a result, have to make tough choices.
How do you reconcile this role with the “community-based” principles of community building?
Those who favor locally oriented, flexible work grounded in community-based organizations closest to the grassroots constituencies, or even in informally organized community groups that are not incorporated organizations, must present a credible response to accountability demands—welcoming the chance to improve their work—while maintaining the right to push back. The latter may include pointing out classic problems in measuring performance, such as measuring the wrong thing well and imagining that everything valuable can be counted. Some funders think measures and “metrics” always mean numbers. More useful are balanced performance dimensions: concepts that lead to concrete measures, followed by targets for those measures.
Continuing the evolution of purely summative evaluation into a more grounded approach is critical. So is blurring the line between evaluation and management improvement or capacity building in general. This does not mean turning evaluators into mere cheerleaders for whatever those “closest to the ground” want to do.
Beyond evaluation per se, a huge need exists for grounded, reflective, practice-oriented professional development for those seeking to assist community builders—training the trainers and coaches, so to speak. Local practitioners are being compelled to ask tough questions: How do we create a meaningful, ongoing, balanced, and honest conversation about success on the issues we care about? How do we reconcile internal and external demands? How do we make use of the burgeoning toolbox—theory of change, community capacity, negotiation and consensus building techniques, one organizing philosophy or another—so that we have the right tools for the right job?
The strategy tools and other resources at www.community-problem-solving.net were created with these needs in mind. It’s not an age of information anymore, but one of information overload. People need help sorting out what counts.

¹ Briggs, X. de S. (2001). The will and the way: Local partnerships, political strategy, and the well-being of America’s children and youth. Cambridge, MA: John F. Kennedy School of Government, Harvard University.
² Wenger, E., McDermott, R., & Snyder, W. M. (2002). Cultivating communities of practice: A guide to managing knowledge. Boston, MA: Harvard Business School Press.