
Using Outcome Information

Making Data Pay Off

Document date: July 31, 2004
Released online: July 31, 2004

The nonpartisan Urban Institute publishes studies, reports, and books on timely topics worthy of public consideration. The views expressed are those of the authors and should not be attributed to the Urban Institute, its trustees, or its funders.

Note: This report is available in its entirety in the Portable Document Format (PDF).


Preface

"Program outcome measurement helps organizations increase the effectiveness of their programs and communicate the value of what they do."

The above quote is a common mantra in the outcome measurement field. However, after investing a great deal of effort in identifying and measuring outcomes, analyzing data, and reporting results to the funders that often prompted the effort in the first place, many nonprofit organizations do not take the next steps that will make the effort truly pay off. The emphasis on collecting the data needs to be complemented by a commitment to using it. This final guide in the Urban Institute's series on outcome management for nonprofit organizations shows how to put the data to use.

Systematic use of outcome data pays off. In an independent survey of nearly 400 health and human service organizations, program directors agreed or strongly agreed that implementing program outcome measurement had helped their programs

  • focus staff on shared goals (88%);
  • communicate results to stakeholders (88%);
  • clarify program purpose (86%);
  • identify effective practices (84%);
  • compete for resources (83%);
  • enhance record keeping (80%); and
  • improve service delivery (76%).

Based on their experiences, 89 percent of these directors would recommend program outcome measurement to a colleague.

Such benefits do not just happen, of course. These organizations set up procedures and set aside time to review and discuss their outcome findings regularly, making sure they received a return on the investment that outcome measurement represents for any group that pursues it. In the process, these organizations moved from passively measuring their outcomes to actively managing them, using the data to learn, communicate, and improve.

This guide offers practical advice to help other nonprofits take full advantage of outcome data, identifying a variety of ways to use the data and describing specific methods for pursuing each use. It is a valuable tool for ensuring that outcome measurement fulfills the promise of helping organizations increase their effectiveness and communicate their value.

Margaret C. Plantz
Director, Community Impact Knowledge Development
United Way of America


Introduction

The outcome data have been collected and the analyses completed—so, what is the next step? Now is the time to benefit from the effort it took to get to this stage by moving from outcome measurement to outcome management. This next step occurs when an organization uses outcome information to improve services.

This guide provides ideas on various ways to use outcome information; others in this series provide help in selecting outcome indicators, collecting data, and completing analyses regularly.1 Exhibit 1 summarizes the various uses for outcome information.

Outcome data are most often used internally by nonprofit program managers. However, other audiences within the organization, such as board members and direct service personnel, are also important potential users. In addition, there are several potential external users, including clients, funders, volunteers, community members, and other nonprofit organizations providing similar services.

Nonprofit managers find outcome data most valuable after comparisons and analyses are completed and possible explanations for unexpected findings are explored. Once these steps are taken, a report that clearly communicates the findings should be prepared for internal use. Several audiences, including both management and program staff, can use the information for many purposes.

While this guide focuses on internal uses of outcome data, it also reviews some important external uses—informing clients, volunteers, board members, service users, or donors and funders.

After all the collecting and analyzing of outcome information, the rewards are at hand. Nonprofits can now use that information to help improve programs and services, and provide better information to external stakeholders. This guide is designed to help nonprofits cross into performance management by fully using their outcome information.


EXHIBIT 1

Uses for Outcome Information

Detecting Needed Improvements
Use 1. Identify outcomes that need attention
Use 2. Identify client groups that need attention
Use 3. Identify service procedures and policies that need improvement
Use 4. Identify possible improvements in service delivery

Motivating and Helping Staff and Volunteers
Use 5. Communicate program results
Use 6. Hold regular program reviews
Use 7. Identify training and technical assistance needs
Use 8. Recognize staff and volunteers for good outcomes

Other Internal Uses
Use 9. Identify successful practices
Use 10. Test program changes or new programs
Use 11. Help planning and budgeting
Use 12. Motivate clients

Reporting to Others
Use 13. Inform board members
Use 14. Inform current and potential funders
Use 15. Report to the community


Completing the Data Analyses

The raw data must be combined and summarized for use and interpretation, typically involving comparisons over time or with targets. Often, information from basic, easily calculated analysis can help program managers and staff draw conclusions and guide improvement actions. However, drawing conclusions from the data requires judgment and often depends on experience or an intuitive response to the analysis.

For example, is a decrease in teen mother attendance at prenatal visits from one quarter to the next the same across categories of clients and for different staff members? It may be important to identify results by client characteristics (ethnicity; age; income level; living situation—with parents, on her own, or with the father of her child; employment or educational status), as such characteristics may be associated with different outcomes. It is also important to assess the difficulty of helping different types of clients; the level of improvement is related to the challenges of a particular client group. Similarly, a review of outcomes for clients by individual staff member might help identify successful practices or a need for additional training or supervision. Summarized data can mask the fact that some types of clients have substantially poorer outcomes than others. If the types of clients with relatively poor outcomes are identified, then steps to address their problems can be devised. Again, staff reviews must account for the difficulty of serving the client, or there may be an incentive to improve performance by avoiding hard-to-serve clients.

Typical analyses that can indicate how well individual programs are performing, provide information on areas needing improvement, and identify programs or staff meriting recognition include the following (a brief worked example follows the list):

  • Comparing recent outcomes with those from previous reporting periods;
  • Comparing outcomes to targets, if the program has set such targets;
  • Comparing client outcomes grouped by client characteristics, such as age, gender, race and ethnicity, education level, family income, public assistance status, household size, and so on;
  • Comparing client outcomes grouped by various service characteristics, such as amount of service provided, location/office/facility at which the service was provided, program content, or the particular staff providing the service to the client; and
  • Comparing outcomes with outcomes of similar programs in other organizations, if available.
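
For organizations that keep client-level outcome records in a spreadsheet or database, these comparisons can be produced with little effort. The sketch below shows how the breakouts described above might look in Python using the pandas library; the program, column names, and figures are all hypothetical, invented for illustration only.

  # A minimal sketch of the breakout analyses listed above, using Python's
  # pandas library. All names and figures are hypothetical.
  import pandas as pd

  # Each row is one client in one reporting period.
  records = pd.DataFrame({
      "quarter":          ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
      "age_group":        ["under 16", "16-17", "18-19", "under 16", "16-17", "18-19"],
      "staff_member":     ["A", "B", "A", "A", "B", "B"],
      "visits_kept":      [3, 4, 5, 2, 4, 5],
      "visits_scheduled": [5, 5, 5, 5, 5, 5],
  })
  records["attendance_rate"] = records["visits_kept"] / records["visits_scheduled"]

  # Comparison with the previous reporting period.
  print(records.groupby("quarter")["attendance_rate"].mean())

  # Breakout by a client characteristic (here, age group).
  print(records.groupby(["quarter", "age_group"])["attendance_rate"].mean())

  # Breakout by staff member, to spot successful practices or training needs.
  print(records.groupby("staff_member")["attendance_rate"].mean())

  # Comparison with a target, if the program has set one.
  target = 0.80
  latest = records.loc[records["quarter"] == "Q2", "attendance_rate"].mean()
  print(f"Q2 attendance {latest:.0%} vs. target {target:.0%}")

The same breakouts can, of course, be produced in any spreadsheet program; the point is that none of these analyses require specialized statistical software.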

If clients are surveyed, an "open-ended" question is often very useful. Client reasons for poor ratings or suggestions to improve services can be grouped and summarized to help program managers and staff members pinpoint service areas to improve. For example, one employment training program surveyed employers of its clients to identify specific areas in which clients needed training. Based on the employer responses, the program added a customer service component to its training.
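
Summarizing such open-ended responses is also straightforward once staff have assigned each comment to a category. A minimal sketch, again in Python, assuming hypothetical category labels:

  # Tally open-ended survey comments after staff have coded each one
  # into a category. Categories and counts are hypothetical.
  from collections import Counter

  coded_comments = [
      "transportation", "scheduling", "transportation",
      "staff courtesy", "scheduling", "transportation",
  ]
  for category, count in Counter(coded_comments).most_common():
      print(f"{category}: {count}")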

All these analyses convert the data collected into useful information.2

Seeking Explanations

A major function of outcome measurement is raising questions. A review of the outcome data report should lead to discussions about the program. Whether outcomes appear very good or poor, nonprofits need to seek explanations before taking action, as these explanations can provide important guidance on what needs to be done.

External factors sometimes affect program outcomes. Changes in local economic conditions can affect employment opportunities for graduates of a job training program, or changes in the mix of clients entering a program (usually not controlled by the nonprofit) can affect program results. Internal factors, such as personnel turnover, facility conditions, and changes in program funding, could also affect outcomes.

Exhibit 2 suggests ways to search for explanations. Only the last suggestion, an external in-depth evaluation, is likely to incur significant added expense.

Typically, program personnel are in the best position to understand why the outcome data are the way they are. For example, service workers in a teen mother program may know that their clients' typical reasons for missing prenatal visits are forgetting they had an appointment and difficulty obtaining transportation.

Participants at regular performance review meetings (such as "How Are We Doing?" sessions) can help identify why problems have occurred. These meetings can also be used to generate suggestions for improvements and specific actions to improve future outcomes.

If more time is needed, an organization can form a working group of staff members. For example, if unsatisfactory outcomes on delivery times and meal quality for a senior meals program were concentrated in one area, a working group might explore factors such as traffic congestion, patterns of one-way streets, or housing characteristics (such as needing to use stairs or elevators to deliver meals). The group could also develop recommendations that could lead to improvements, such as modifying a delivery route.

Sometimes, clients are asked to rate specific characteristics of services, such as waiting times; helpfulness or knowledge of staff; adequacy of information on the program; and location and accessibility of the facility (including proximity to public transit). Their responses may hold clues to why outcomes are not as expected. Any open-ended comments, suggestions for improvements, or ratings on specific aspects of a program on a survey can provide further guidance.

Holding focus groups with clients or other stakeholders is another way to obtain helpful information for identifying the causes of certain outcomes. Participants can also provide suggestions for improvements. Typically, a small number (about 6 to 12) of current or former clients are invited to a 90- to 120-minute session, held at a time and place convenient for them, to discuss the issue.3


EXHIBIT 2

Seeking Explanations for Unusual Outcomes

  • Talk with individual service delivery personnel and supervisors
  • Hold "How Are We Doing?" meetings (group discussions with personnel) to review results
  • Form a working group of staff, and perhaps volunteers, to examine the problem
  • Review client responses to survey questions that asked clients to rate specific service characteristics
  • Review responses to open-ended questions in client surveys and those that probe specific aspects of services
  • Hold focus group sessions with samples of clients to obtain their input
  • Consider performing an in-depth evaluation, if findings are particularly important and resources are available

In some cases, focus groups may reveal that the true nature of the problem differs somewhat from what is indicated by the outcome data. A focus group with senior citizens participating in a meals program might reveal that bland food was a problem, and that spicing up the food a bit could increase client satisfaction.

An outside, in-depth evaluation to identify reasons for a problem outcome will take the most time and, unless volunteers or free assistance from a local college are available, will require special funding. This approach can also provide the most information, but it is usually warranted only for major problems and only when resources are available to support an independent evaluation.

Formatting the Report

Creating useful displays of the analyses of the outcome data is a key, but often poorly performed, next step. While special care needs to be taken with reports for external use, it is also essential to provide an easily understood version for internal use. The goal is a report that is clear, comprehensive, and concise, often a difficult balance. Users should be able to draw conclusions about performance without feeling overwhelmed by data.

Here are some tips to help present the information effectively (a brief charting example follows the list):

  • Keep it simple.
  • Include a summary of major points.
  • Don't crowd too much on a page.
  • Avoid technical jargon and define any unfamiliar terms.
  • Define each outcome indicator.
  • Highlight points of interest on tables with bold type, circles, or arrows.
  • If feasible, use color to help highlight and distinguish key findings.
  • Label charts and tables clearly—titles, rows, columns, axes, and so on.
  • Identify source and date of the data presented and note limitations.
  • Provide context (perhaps a history or comparisons with other organizations or the community).
  • Add variety to data presentation by using bar or pie charts to illustrate points.
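
To make the charting tips concrete, here is a minimal plotting sketch in Python using the matplotlib library. It applies several of the tips above (a clear title, labeled axes, a marked target line, and a source note); the program, target, and figures are hypothetical.

  # A clearly labeled bar chart for an internal outcome report,
  # following the presentation tips above. All figures are hypothetical.
  import matplotlib.pyplot as plt

  quarters = ["Q1", "Q2", "Q3", "Q4"]
  attendance = [62, 68, 71, 75]  # percent of prenatal visits kept

  fig, ax = plt.subplots()
  ax.bar(quarters, attendance)
  ax.axhline(80, linestyle="--", label="Target (80%)")  # provide context
  ax.set_title("Prenatal Visit Attendance, by Quarter")
  ax.set_xlabel("Reporting quarter")
  ax.set_ylabel("Visits kept (%)")
  ax.set_ylim(0, 100)
  ax.legend()
  # Identify the source and date of the data, and note limitations.
  fig.text(0.01, 0.01, "Source: program records, FY2004. Rates not adjusted for client mix.", fontsize=8)
  fig.savefig("attendance_by_quarter.png")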

The first report should be for internal use and typically will be considerably more detailed than reports provided to external stakeholders. Nevertheless, much care needs to be taken to make this report fully clear and useful. After input from those in the organization, a version of the report for public use can be prepared, tailored to the needs of those outside audiences.

About This Guidebook

This guide identifies basic uses of outcome information grouped into four sections. A fifth section focuses on some limitations of outcome measurement.

Detecting Needed Improvements presents ways to use the outcome information with staff members (and volunteers, where applicable) to identify where improvement is needed and ways to improve services, and thus outcomes, for clients.

Motivating and Helping Staff and Volunteers suggests uses aimed at encouraging staff and volunteers to focus on achieving better service outcomes and helping them improve outcomes.

Other Internal Uses covers additional ideas on how to use outcome data to improve service delivery procedures, provide feedback to assist in planning and budgeting, strengthen the organization's ability to sustain itself, and, over the long run, improve its service effectiveness.

Reporting to Others provides ways to use reports on outcomes to inform other stakeholders, including donors, funders, volunteers, board members, service users and clients, and community members.

Cautions and Limitations identifies some issues to consider before changing programs based on outcome information.

Notes from this section

1 See the rest of the Urban Institute series on outcome management for nonprofit organizations—Key Steps in Outcome Management, Developing Community-wide Outcome Indicators for Specific Services, Surveying Clients about Outcomes, Finding Out What Happens to Former Clients, and Analyzing Outcome Information—all available at http://www.urban.org.

2 This guide reproduces several exhibits from Analyzing Outcome Information to illustrate how nonprofits can use data that have been analyzed in various ways. Using Outcome Information focuses on using the information, while the previous guide focused on analyzing it.

3 Another guide in this series, Key Steps in Outcome Management, covers focus groups in more detail.


Acknowledgments

This report was written by Elaine Morley and Linda M. Lampkin.

The report benefited greatly from the assistance, comments, and suggestions of Margaret C. Plantz of United Way of America and Harry P. Hatry of the Urban Institute. In addition, Patrick Corvington of Innovation Network provided initial guidance on the content of the report.

The editors of the series are Harry P. Hatry and Linda M. Lampkin. We are grateful to the David and Lucile Packard Foundation for its support.

We also thank the staff of the following nonprofit organizations, whose uses of outcome information are included in this report:

  • Big Brothers Big Sisters of Central Maryland
  • Boys and Girls Club of Annapolis and Anne Arundel County
  • Community Shelter Board
  • Crossway Community
  • Jewish Social Service Agency
  • KCMC Child Development Corporation
  • Northern Virginia Family Services
  • Northern Virginia Urban League
  • United Community Ministries
  • Volunteers of America

Examples of outcome information use by most of these organizations are drawn from Making Use of Outcome Information for Improving Services: Recommendations for Nonprofit Organizations (Washington, DC: The Urban Institute, 2002) and How and Why Nonprofits Use Outcome Information (Washington, DC: The Urban Institute, 2003). Both publications provide additional examples and detail.


