Volume VIII, Number 3, Winter 2002
Issue Topic: Public Communications Campaigns and Evaluation
Ask the Expert
Dr. Sharyn Sutton, President, and Elizabeth Heid Thompson, Vice President, of the social marketing firm Sutton Group in Washington, DC, have worked on the research, strategic planning, and execution of numerous social change efforts and public service campaigns.
We’ve seen a lot of money spent on evaluations of communication efforts that are not helpful to program implementers or their funders. Like all social change efforts, public communication campaigns are complex initiatives that operate in noisy settings. Yet many evaluations try to apply a strict causal paradigm to detect cause-and-effect relationships. Although trying to understand these relationships is admirable, the very nature of public communication campaigns often makes doing so impossible.
Evaluating for-profit advertising campaigns is easier because firms running the campaign have much more control over the intervention. They develop the message and advertisements, are able to direct the campaign to reach the target audience, and can control the level of media saturation. Designing an evaluation of this controlled approach is relatively straightforward.
By contrast, public communication campaigns frequently do not have this luxury of control. They cannot afford paid media or buy enough of it to reach saturation. As a result, public campaigns rely on public service announcements that may or may not reach their target audience, attempt to get earned media, or recruit partners who may distort or weaken the message. In other words, they lose control. In addition, advertising is just one aspect of public communication campaigns. These campaigns tend to be embedded in broader social change efforts that employ a range of intervention tactics, including public relations and policy advocacy.
Given the multifaceted and dynamic nature of these campaigns, applying a strict causal evaluation to them becomes a very difficult, if not impossible, task. Although determining specific causal relationships is a laudable goal, we have seen many evaluations fall short because they rely on experimental or quasi-experimental designs to do so.
These evaluations tend to set up artificial controls, lack the flexibility to change with an evolving campaign, and cannot separate the effects of a specific initiative from those of other activities aimed at the same goals.
The paradox is that these evaluations attempt to attribute causality in an environment that does not provide sufficient information for causal claims. Perhaps more importantly, they don’t tell us why a campaign did or did not work, which limits our ability to learn from it and improve future efforts.
Evaluations need to better reflect the real-life settings in which public communication campaigns operate. It’s time for a shift away from a causal evaluation paradigm to a social change one. Social change evaluations more closely track and assess a campaign’s activities and interim results and link them to its ultimate goals. They look not only at what happened before and after the campaign but also at interim tactical progress, so that findings can be fed back into the program to improve its chance of success.
For example, an evaluation should not only track changes in attitudes and awareness at the end of a campaign; it should also monitor whether the target audience was exposed to the message and heard it. Measuring only awareness and/or behavior change cannot tell you whether a campaign failed because of a poor message, a lack of message saturation, or some other cause, such as poor implementation. By linking interim results to a campaign’s longer-term goals, the evaluation can help campaign directors pinpoint areas that need improvement as well as make informed judgments about the campaign’s success.
Resources are available at www.suttonsocialmarketing.com.
Austin, E. (2001, January). Profile: Sharyn Sutton, Ph.D. Advances [The Robert Wood Johnson Foundation’s quarterly newsletter], 1, 4.
Balch, G. I., & Sutton, S. M. (1997). Keep me posted: A plea for practical evaluation. In M. E. Goldberg, M. Fishbein, & S. E. Middlestadt (Eds.), Social marketing: Theoretical and practical perspectives (pp. 61–74). Mahwah, NJ: Lawrence Erlbaum Associates.
Sutton, S. M., Balch, G. I., & Lefebvre, R. C. (1995). Strategic questions for consumer-based health communications. Public Health Reports, 110, 725–733.
Sutton, S. M., & Thompson, E. (2001). An in-depth interview study of health care policy professionals and their information needs. Social Marketing Quarterly, 7(4), 16–26.
Sharyn Sutton, Ph.D., President
Elizabeth Heid Thompson, Vice President
The Sutton Group
4590 MacArthur Boulevard, N.W., Suite 200
Washington, DC 20007