Volume VIII, Number 3, Winter 2002
Issue Topic: Public Communications Campaigns and Evaluation
Ask the Expert
Gary T. Henry is a professor in the Andrew Young School of Policy Studies and the Department of Political Science at Georgia State University. He serves as co-editor-in-chief of the journal New Directions for Evaluation and recently co-authored Evaluation: An Integrated Framework for Understanding, Guiding, and Improving Policies and Programs (2000, Jossey-Bass).
We are just beginning to learn how to grapple with the evaluation of public communication campaigns. Evaluators have focused largely on evaluating direct services, and campaigns' use of information-related instruments to change attitudes and behaviors is relatively foreign to the evaluation community. Evaluating public information campaigns is replete with challenges that will require new developments.
First, we should be looking at the literature in the fields of public opinion, marketing, psychology, and public health. Researchers in these domains have already uncovered a great deal about social marketing and behavior-change campaigns, and that gives evaluators an empirical base we can tap. We ought to use that research base to learn how to conceptualize and measure campaign outcomes and how these outcomes interrelate.
Second, almost every campaign seems to be based on the assumption that if we can just educate people about an issue, they will care more about it. We need to challenge that belief. Existing research indicates that our interests and emotions lead our thinking, and we are prone to learn more about things we are interested in. Thus far, people have relied too much on awareness as an outcome and not considered salience, or the extent to which target audience members are personally concerned with an issue.
These concepts and pathways are complex and it’s a real challenge to get people to be specific. Terms like knowledge and awareness get tossed around, and they are viewed as having a causal relationship with attitude change. We need to unpack the interim outcomes that lead to the ultimate outcomes, such as policy change or behavioral change.
Third, we need to get better at evaluating the grassroots-level work that often accompanies communication campaigns. An often-used notion with campaigns is combining “air” and “ground” strategies: the air strategy is the public media campaign, and the ground strategy is grassroots organizing. We need to do more quantitative and qualitative work to develop outcomes and pathways at the grassroots level. It is important to measure not just the uptake of the messages by the public, but whether the grassroots activities are really enhanced by having the media campaign.
Fourth, our tools and methodology in this arena are vastly deficient. For example, I worked on the evaluation of the Voluntary Ozone Action Program, the Georgia Department of Natural Resources campaign to improve Atlanta’s air quality by reducing behaviors that contribute to ground-level ozone. We faced some methodological challenges and couldn’t find techniques that would reliably estimate reductions in driving on alert days.
We ended up testing, to good effect, the use of rolling sample surveys, which use daily surveys to obtain measures of target outcomes—attitudes and behaviors—from an independent sample of individuals surveyed each day. This method tracks the day-to-day shifts in public opinion and behavior and enables evaluators to create natural experiments based on when campaign events or media coverage will take place (the treatment occurs on the days when campaign events take place; comparisons are on days when no campaign events occur).
Henry, G. T., & Gordon, C. S. (2001). Tracking issue attention: Specifying the dynamics of the public agenda. Public Opinion Quarterly, 65, 157-177.
Henry, G. T., & Gordon, C. S. (in press). Driving less for better air: Impacts of a public information campaign. Journal of Policy Analysis and Management.
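The rolling-sample logic described above can be sketched in a few lines of code. Everything here is illustrative: the sample sizes, the event schedule, and the outcome measure (share of respondents reporting reduced driving) are assumptions for demonstration, not data from the VOAP evaluation.

```python
# Hypothetical sketch of a rolling sample survey used as a natural experiment:
# an independent sample is surveyed each day, and outcomes on campaign-event
# days (treatment) are compared with outcomes on non-event days (comparison).
# All numbers below are simulated, not actual study data.
import random
import statistics

random.seed(0)

days = []
for day in range(60):
    event_day = day % 7 in (2, 4)                      # assumed event schedule
    base_rate = 0.35 + (0.10 if event_day else 0.0)    # assumed campaign effect
    # Each day's sample is independent (50 simulated respondents/day).
    sample = [random.random() < base_rate for _ in range(50)]
    days.append({"event_day": event_day,
                 "share_reduced_driving": sum(sample) / len(sample)})

# Natural-experiment comparison: mean outcome on event vs. non-event days.
treat = [d["share_reduced_driving"] for d in days if d["event_day"]]
control = [d["share_reduced_driving"] for d in days if not d["event_day"]]
effect = statistics.mean(treat) - statistics.mean(control)
print(f"Event days: {statistics.mean(treat):.3f}  "
      f"Non-event days: {statistics.mean(control):.3f}  "
      f"Difference: {effect:+.3f}")
```

In a real evaluation the daily samples would come from fielded surveys rather than simulation, and the comparison would control for weather, day of week, and media coverage; the point is only that daily independent samples let event and non-event days serve as treatment and comparison conditions.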
Finally, I think we need to work more on how to evaluate the public policy change that many campaigns pursue. Usually, before a policy is adopted, you will see, for example, press releases from key leaders and legislative hearings. We have to be open to doing content analysis of those releases and of testimony at the hearings to see the extent to which they were influenced by the campaign. We need to develop our techniques for tracing impacts and test them. We shouldn't simply ask, “Were we in the newspaper?”
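A minimal version of that kind of content analysis can be sketched as keyword counting over policy documents. The documents, keyword list, and helper function below are hypothetical, meant only to show the mechanics of tracing campaign language into releases and testimony.

```python
# Illustrative content-analysis sketch: count campaign-theme keywords in
# policy documents (press releases, hearing testimony). The texts and the
# keyword list are invented for demonstration, not from an actual campaign.
CAMPAIGN_KEYWORDS = {"ozone", "air quality", "carpool", "alert day"}

documents = {
    "press_release_1": "Governor urges carpool use on each ozone alert day.",
    "hearing_testimony": "Witnesses discussed budget impacts of the measure.",
}

def keyword_hits(text: str) -> int:
    """Count how many campaign keywords appear in a document (case-insensitive)."""
    lower = text.lower()
    return sum(1 for kw in CAMPAIGN_KEYWORDS if kw in lower)

for name, text in documents.items():
    print(name, keyword_hits(text))
```

In practice, content analysis would use a coded coding frame and human or validated automated coders rather than raw keyword matches, but even a simple count like this moves beyond asking “Were we in the newspaper?” toward asking whether campaign themes actually surfaced in the policy process.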
We should not be daunted by the methodological challenges of evaluating campaigns. We have to push ahead; we have to try some new things. We have to put data collection strategies into the field even if they are imperfect, try them, and work on their development. We need evaluators who are going to be in this for the long haul because we need to learn from our failures and improve; we need to share our experiences and move forward together.
Gary T. Henry
Andrew Young School of Policy Studies
Georgia State University
Suite 1030 Urban Life Building
Atlanta, GA 30302