Volume VIII, Number 3, Winter 2002
Issue Topic: Public Communications Campaigns and Evaluation
Questions & Answers
Ethel Klein is a longtime campaign strategist and pollster. Currently, she is president of EDK Associates, a strategic research firm based in New York City. Dr. Klein has designed campaigns for nonprofit organizations and foundations on issues of women’s rights, low-income housing, environmental protection, gay and lesbian rights, work and family policies, health education, and tax reform. Prior to starting her own firm, she was a professor at Harvard University (1979–1984) and Columbia University (1984–1990). She is the author of several books, including Gender Politics (1984) and Ending Domestic Violence: Changing Public Perceptions/Halting the Epidemic (1997).
Q: What are the main ingredients of successful public communication campaigns that are designed to change behavior?
A: The first and most common ingredient would be increasing knowledge and awareness. If people don’t recognize the issue or problem, then that is where you start. If people are aware of the issue, but it’s not important to them, then you work on increasing its saliency.
But you can’t get stuck there. Too many people think that if you deliver the bad news, people are going to be rational and change their behavior. That will change some people’s behavior, but it is never enough. There is an accumulation of knowledge now that says you have to go beyond awareness in order to make real change.
You have to change behavior by giving people something to do to change it. The trick is figuring out where the locus of responsibility for the behavior change should be. Is it purely on the individual performing the behavior or is it also on creating public will and social responsibility to help make that change happen? You have to look at the issue’s epidemiology and ask, “What are the root causes and what sustains the behavior?”
What you’ll find is that to get results you have to focus on the social context surrounding the behavior and increase social responsibility for helping to change it. For example, with AIDS, just saying persuasively “AIDS will kill you” did not change behavior. Scare tactics just moved people into denial. We learned, from a great deal of public education research and evaluation, to create a broad sense of social responsibility for ending the epidemic and to engage, for example, the gay community in creating a social context where people wouldn’t be afraid or ashamed or worry about retribution if they insisted someone use a condom.
Another example is the gun control movement. Support for gun control increased when the question changed from “Why did the kid pull the trigger?” to “How did the kid get the gun?” What helped was reframing from the individual action of stopping the kid from pulling the trigger to keeping the gun away from him in the first place. Once that reframing took place, a lot of social solutions followed, like licensing, registration, and trigger locks.
Once you’ve worked on social context and made individuals around a problem responsible, the question becomes “How do we codify it and make it law?” The emphasis on social context and responsibility will often provide the rationale needed to get laws passed. It’s like the example of secondhand smoke. Laws were passed once we got people to say, “I want you to stop smoking because it is going to kill me.” Getting large numbers of people to say this provided the rationale for laws that now ban smoking in public areas.
Mothers Against Drunk Driving (MADD) is a good example of a campaign that worked on all of these elements over time. Its evolution shows the need for a strategic progression to how campaigns unfold and what elements (e.g., awareness, saliency, social responsibility) they work on and when. One of the MADD campaign’s early hallmarks was to work on public awareness and saliency by letting people know how many kids were killed in drunk driving accidents. This was followed by ads focused on individual responsibility that said, “Don’t drink and drive.” When that approach hit its limit in terms of results, the campaign moved on to the designated driver concept. It put the issue in a social context and said, “Because this behavior has social consequences, we have a right to put some constraints on it.” So the campaign focused on getting people to say that somebody needs to be the designated driver and that people need to make sure their friends don’t drink and drive. The first question after a drunk driving incident now is, “Why wasn’t there a designated driver?” or “Why didn’t someone take away the keys?” This acceptance of social responsibility provided the rationale needed to get stricter laws passed.
Q: What do you think are the main challenges for the evaluation of public communication campaigns?
A: One is seeing beyond the external approach, in which evaluators say, “We take measurements, we’re completely objective, and we come in at the end and give you information.” That’s a fine approach if you’re not looking to help create change. Evaluators become seduced by the campaign because it’s challenging and they’re learning new things. But by the time they offer information, the campaign is done and people don’t want to hear it. The only way evaluations will really help campaigns is if they can help them as they’re happening.
Campaigns are evolving and living things; they need to respond to what does and does not work. Evaluation is one of the few places campaigns can get that information. The challenge is to find a way in which you make evaluations dynamic and a part of the campaign process, while remaining objective. Let people know what’s working and what’s not working and how it can be fixed.
Evaluation should also help campaigns innovate. Many campaigns are stuck in a model, trying to replicate, for example, the MADD and anti-smoking campaigns. The problem is that those campaigns happened at a time when people weren’t saturated with public service announcements. The more people replicate them, the less effective they are. We need to find new models and evaluation should help us find them.
For example, I worked on a domestic violence campaign that wanted to organize the business and faith-based communities. The campaign’s approach initially was to try to get invited to talk at business meetings or in front of congregations. They assumed that if they were successful in getting invited to speak, they had done their organizing. I suggested they needed to do more: work with the clergy, for example, and help them go through their youth programs and change the curriculum, give them books, and so forth. This approach takes an enormous amount of upfront work, but once it’s in place it replicates itself.
The campaign staff resisted this idea. So I asked them, “How will you know your approach is successful? Let’s pick a measure and see how well it is working.” One indicator was how many of their toolkits people picked up or called in for. They found these numbers were small. They learned that getting people to intervene, become active, leave a palm card, put up a poster, put on a bumper sticker, and so on takes a persistent sales job. The message needs to be repeated constantly and the action needs to be supported. So they modified their approach. This is the level of involvement a constructive evaluation needs to have.
If you can build a process that’s truly a learning environment and where people can say, “That’s not working; let’s figure out what else we can do,” then that’s a contribution. That’s how a campaign passes on its lessons.
Q: What do you think the field needs to do in order to move forward on the evaluation of public communications campaigns?
A: I’d like to see evaluation and campaign planning done at the same time, where the evaluators are part of the campaign design and implementation team. One of the big challenges of a learning evaluation is getting all of the players to be team members. To do this we need evaluators who are good at evaluation and good at campaigns, and that’s a small group. Campaign staff have to believe evaluators are giving them sound and strategic advice.
I’d also like to see campaigns get beyond awareness. I’d like to see the campaign team, with the evaluators, pick an issue where awareness is not the problem and struggle together with how to approach it. Again, for most of the issues we care about, knowledge and awareness are not the problem. We need to struggle with increasing and measuring public will and action.
Another challenge is that evaluation focuses too much on campaign outputs, or the things about campaigns that are easily measured, like the number of ads we develop and how many times they run on television. We need to focus more on outcomes like issue saliency and social responsibility that both contribute to learning and really capture what the campaign is trying to achieve. The caution here is to develop realistic and useful interim outcomes. Campaigns have to build momentum and support, and both campaign staff and evaluators need to remember that that takes time.