Indicators: Definition and Use in a Results-Based Accountability System
This brief defines and explores the role of indicators as an integral part of a results-based accountability system. The brief shows how indicators enable decision makers to assess progress toward the achievement of intended outputs, outcomes, goals, and objectives.
About This Series
These short reports are designed to frame and contribute to the public debate on evaluation, accountability, and organizational learning.
An indicator provides evidence that a certain condition exists or that certain results have or have not been achieved (Brizius & Campbell, 1991, p. A-15). Indicators enable decision makers to assess progress toward the achievement of intended outputs, outcomes, goals, and objectives. As such, indicators are an integral part of a results-based accountability system.
Indicators can measure inputs, processes, outputs, and outcomes. Input indicators measure the resources, both human and financial, devoted to a particular program or intervention (e.g., number of case workers). Input indicators can also include measures of characteristics of target populations (e.g., number of clients eligible for a program). Process indicators measure the ways in which program goods and services are provided (e.g., error rates). Output indicators measure the quantity of goods and services produced and the efficiency of production (e.g., number of people served, speed of response to reports of abuse). These indicators can be identified for programs, sub-programs, agencies, and multi-unit/agency initiatives.
Outcome indicators measure the broader results achieved through the provision of goods and services. These indicators can exist at various levels: population, agency, and program. Population-level indicators measure changes in the condition or well-being of children, families, or communities (e.g., teen pregnancy rate, infant mortality rate). Changes in population-level indicators are often long-term results of the efforts of a number of different programs, agencies, and initiatives. In some cases, rather than providing information about the results achieved by interventions, population-level indicators may provide information about the context in which, or the assumptions under which, these interventions operate. For example, the overall level of unemployment provides important contextual information for job placement programs. In this case, monitoring the unemployment rate allows stakeholders to correctly interpret program results. Agency-level indicators measure results for which an agency is responsible; program-level indicators measure the results for which a program or sub-program is responsible. Agency- and program-level outcome indicators are often defined more narrowly than those pertaining to the population as a whole; for example, they may measure pregnancy rates among teenage girls in a given county or among girls receiving a given set of services. Identification of appropriate indicator levels ensures that expectations are not set unrealistically high.
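The typology above (indicator type crossed with indicator level) can be made concrete with a small sketch. The record type, field names, and example indicators below are illustrative assumptions, not part of the brief.

```python
from dataclasses import dataclass

# Illustrative only: a minimal record for cataloguing indicators by
# what they measure (input/process/output/outcome) and the level at
# which they apply (population/agency/program).
@dataclass
class Indicator:
    name: str
    kind: str   # "input" | "process" | "output" | "outcome"
    level: str  # "population" | "agency" | "program"

catalog = [
    Indicator("number of case workers", "input", "program"),
    Indicator("error rate", "process", "program"),
    Indicator("number of people served", "output", "program"),
    Indicator("teen pregnancy rate", "outcome", "population"),
]

# Filtering the catalog by type, e.g., to report outcomes separately.
outcomes = [i.name for i in catalog if i.kind == "outcome"]
```

Keeping both the type and the level on each record mirrors the brief's point that the same subject (say, teen pregnancy) can be tracked at the population, agency, or program level with different expectations attached.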
Choosing the most appropriate indicators can be difficult. Development of a successful accountability system requires that several people be involved in identifying indicators, including those who will collect the data, those who will use the data, and those who have the technical expertise to understand the strengths and limitations of specific measures. Some questions that may guide the selection of indicators are:
Does this indicator provide evidence of the expected result or condition?
Indicators should, to the extent possible, provide the most direct evidence of the condition or result they are measuring. For example, if the desired result is a reduction in teen pregnancy, achievement would be best measured by an outcome indicator, such as the teen pregnancy rate. The number of teenage girls receiving pregnancy counseling services would not be an optimal measure for this result; however, it might well be a good output measure for monitoring the service delivery necessary to reduce pregnancy rates.
Proxy measures may sometimes be necessary due to data collection or time constraints. For example, there are few direct measures of school readiness. Instead, a number of measures are used to approximate this: children's participation in high quality developmentally appropriate preschool, parents' exposure to parenthood education services, and family literacy levels. When using proxy measures, planners must acknowledge that they will not always provide the best evidence of conditions or results.
Is the indicator defined in the same way over time? Are data for the indicator collected in the same way over time?
To draw conclusions over a period of time, decision-makers must be certain that they are looking at data that measure the same phenomenon (often called reliability). The definition of an indicator must therefore remain consistent each time it is measured. For example, assessment of the indicator "successful employment" must rely on the same definition of successful (e.g., three months in a full-time job) each time data are collected. Likewise, where percentages are used, the denominator must be clearly identified and consistently applied. For example, when measuring teen pregnancy rates over time, the population of girls from which pregnant teenagers are counted must be consistent (e.g., all girls ages 12 to 18). Additionally, care must be taken to use the same measurement instrument or data collection protocol to ensure consistent data collection.
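The denominator point can be illustrated with a short sketch. The figures and the helper function are hypothetical; the only claim is arithmetic: a rate is comparable across years only when its denominator is defined the same way each time.

```python
# Hypothetical figures: pregnancies per 1,000 girls in a defined age
# range. Year-to-year comparisons are valid only if the denominator
# (here, all girls ages 12 to 18 in the area) is defined and counted
# the same way each time the indicator is measured.
def rate_per_1000(events: int, population: int) -> float:
    return 1000 * events / population

# Both years use the same denominator definition, so the change in
# the indicator is meaningful.
year1 = rate_per_1000(events=120, population=4000)  # 30.0 per 1,000
year2 = rate_per_1000(events=100, population=4000)  # 25.0 per 1,000
change = year2 - year1                              # -5.0
```

If Year 2 silently switched to a different denominator (say, girls ages 15 to 18), the two rates would measure different phenomena and the computed change would be meaningless, which is exactly the reliability problem the paragraph describes.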
Will data be available for an indicator?
Data on indicators must be collected frequently enough to be useful to decision-makers. Data on outcomes are often only available on an annual basis; those measuring outputs, processes, and inputs are typically available more frequently.
Are data currently being collected? If not, can cost effective instruments for data collection be developed?
While demands for accountability are growing, resources for monitoring and evaluation are shrinking. Data, especially data relating to input and output indicators and some standard outcome indicators, will often already be collected. Where data are not currently collected, the cost of additional collection efforts must be weighed against the potential utility of the additional data.
Is this indicator important to most people? Will this indicator provide sufficient information about a condition or result to convince both supporters and skeptics?
Indicators which are publicly reported must have high credibility. They must provide information that will be both easily understood and accepted by important stakeholders. However, indicators that are highly technical or which require a lot of explanation (such as indices) may be necessary for those more intimately involved in programs.
Is the indicator quantitative?
Numeric indicators often provide the most useful and understandable information to decision-makers. In some cases, however, qualitative information may be necessary to understand the measured phenomenon.
A results-based accountability system often requires data on a number of different indicators, reflecting the information needs of different decision-makers. Legislators and senior agency staff frequently require information on long-term outcomes (and, in some cases, inputs) while program and provider staff require details on inputs, processes, and outputs as well as outcomes. For each indicator, baseline data need to be collected to identify the starting point from which progress is examined. Comparison of actual indicator results to anticipated levels (often called performance standards or targets) allows decision-makers to evaluate the progress of programs and policies. Assigning responsibility for indicator data collection to individuals or entities in an organization helps to assure that data will be regularly collected.
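The comparison of actual results to baselines and targets described above can be sketched in a few lines. The indicator names, baselines, targets, and actual values below are invented for illustration.

```python
# Hypothetical data: baseline, target, and actual values for two
# indicators where lower values are better.
indicators = {
    "teen pregnancy rate": {"baseline": 30.0, "target": 25.0, "actual": 27.0},
    "response time (days)": {"baseline": 10.0, "target": 7.0, "actual": 6.5},
}

# An indicator meets its target when the actual value is at or below
# the target. Shortfalls are flagged for further exploration, not
# treated as a verdict of program failure.
flagged = [
    name for name, v in indicators.items() if v["actual"] > v["target"]
]
```

Consistent with the caution in the closing paragraph, a flag produced this way is a red flag only: it says where to look, not why the shortfall occurred or what should be done about it.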
It is important to note that indicators serve as a red flag; good indicators merely provide a sense of whether expected results are being achieved. They do not answer questions about why results are or are not achieved, unintended results, the linkages existing between interventions and outcomes, or actions that should be taken to improve results. As such, data on indicators must be interpreted with caution. They are best used to point to results that need further exploration, rather than as definitive assessments of program success or failure.
General Information on Indicators
Brizius, J. A., & Campbell, M. D. (1991). Getting results: A guide for government accountability. Washington, DC: Council of Governors' Policy Advisors.
Friedman, M. (1995, July). From outcomes to budgets: An approach to outcome-based budgeting for family and children's services. Washington, DC: Center for the Study of Social Policy.
Oregon Commission on Children and Families. (1995). Outcome measurement notebook: 1995-1997. Portland, OR: Author. This notebook, designed to help local Commissions on Children and Families in Oregon to develop outcome measures, includes sections on the use of information in comprehensive planning, research on model programs, and sample measurement methods for frequently measured outcomes. To obtain information, contact: Oregon Commission on Children and Families, 800 NE Oregon Street, Suite 550, #13, Portland, OR 97232.
Price Waterhouse, Office of Government Services. (1992). Assessing the content and quality of performance measures. Washington, DC: Author. For more information call 202-296-0800.
Information on Child and Family Indicators
Child Trends, Inc.: Produces research and publications on indicators related to children and families. To obtain information contact: 4301 Connecticut Avenue, NW, Suite 100, Washington, DC 20008, tel: 202-362-5580, fax: 202-362-5533.
American Humane Association: Produces publications and sponsors an annual roundtable on outcome measures for child welfare services. To obtain information contact: 63 Inverness Drive East, Englewood, CO, 80112-5117, tel: 303-792-9900, fax: 303-792-5333.
Improved Outcomes for Children Project, Center for the Study of Social Policy: Developed a start-up list of outcome measures. To obtain information contact: 1250 Eye Street, NW, Suite 503, Washington, DC 20005, tel: 202-371-1565, fax: 202-371-1472.
Institute for Research on Poverty, University of Wisconsin-Madison: Sponsors an annual conference on indicators of children's well-being; conference papers are available from the Institute. To obtain information contact: 1180 Observatory Drive, 3412 Social Science, Madison, WI 53706, tel: 608-262-6358, fax: 608-265-3119.