Building Blocks

These tools help nonprofits identify what they want to measure and how to track those measures.

In this section
Logic Models and Theories of Change
Indicators
Target Setting


Logic Models and Theories of Change

A logic model is a tool that conveys a program or project in a brief visual format. A theory of change articulates the underlying beliefs and assumptions that guide a program’s or organization’s strategy. Both tools can lay the foundation for shared language, beliefs, and processes for program improvement, establish organizational priorities, and help manage resources. Mapping a program in this way walks an organization through its components, from inputs and activities to outputs and outcomes. The process can help staff better understand their roles in accomplishing outcomes, improve implementation and management, and succinctly inform outside audiences about how your organization works and what your program is working toward. Organizations can develop a logic model, a theory of change, or both; other formats and tools for mapping organizational activities and goals exist as well. To simplify the text below, we use the language of “logic models,” but the steps work regardless of the tool.

Lay the groundwork

  • Identify your audiences (such as program staff, senior leaders, and board members) and brainstorm methods to engage them in logic model development.
  • Take stock of your organization’s oral and documented history of logic model development. Identify what already exists and how the organization uses it.
  • Consider which programs need a logic model. Not all programs require their own logic model; create logic models for programs or groups of programs of sufficient size, scale, relevance, and continuity.
  • Look for example logic models from other nonprofits in your sector and beyond. Logic models with similar outcomes may be particularly useful.
  • Ground assumptions in research evidence about what works.

Develop

  • Involve program staff, senior leaders, and/or board members in logic model development.
  • Start with outcomes for each logic model, and then work backward to outputs, activities, and inputs. Ensure that the connection between inputs, activities, and outputs makes sense.
  • Write your logic model so it is understandable to a broad audience. Wherever possible, avoid program-specific jargon, acronyms only your organization will know, and subjective terms (e.g., adequate, sufficient, appropriate).
  • Tailor logic model formats and level of detail to relevant internal and external audiences. Consider alternative visualization approaches like layering or infographics.
  • Map links between each logic model component (activities → outputs → outcomes) as well as links from each component to your metrics and indicators.
  • Involve communications colleagues in graphic design to make your final logic model distinctive and attractive.

Vet

  • Review logic models with program staff, organizational leadership (including board members), and clients.
  • Convey to reviewers that building the logic model is a means to an end—of reflecting on program activities and their intended effects—not an end in itself.

Use and share

  • Ensure all staff review and have access to logic models and that logic models are part of staff onboarding procedures.
  • Post logic models to your organization’s website.
  • Don’t let your logic models collect dust on a shelf. Refer to them in program data review meetings to ensure alignment between the logic model and data collection practices.

Review

  • Review logic models annually with each program lead and revise as needed.
  • Review with senior leadership, the board, and clients as needed, but at least every other year.

 

Indicators

Although a logic model helps identify what your organization is interested in measuring, it does not specify how to track those items. Monitoring and evaluation staff will need to develop indicators—a way to measure what you care about, usually expressed as a number, a percentage, or a rate—for all inputs, outputs, and outcomes. Output indicators track your program’s progress on various services or activities. Outcome indicators track your program’s successes or achievements. Some outcomes are easy to measure, others are not (e.g., graduation rates versus well-being). There are often multiple possible indicators for a single outcome. Sometimes you must settle for an indicator that best represents the outcome; these are sometimes called proxy indicators. 
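
To make this concrete, the following is a minimal Python sketch using hypothetical figures (the program, counts, and variable names are illustrative only): an output indicator expressed as a count and an outcome indicator expressed as a rate.

  # Output indicator: count of services delivered this quarter (hypothetical figure).
  tutoring_sessions_delivered = 412

  # Outcome indicator: graduation rate among enrolled participants.
  participants_enrolled = 150    # denominator: everyone the indicator covers
  participants_graduated = 126   # numerator: participants who achieved the outcome
  graduation_rate = participants_graduated / participants_enrolled * 100

  print(f"Output: {tutoring_sessions_delivered} tutoring sessions delivered")
  print(f"Outcome: {graduation_rate:.1f}% of participants graduated")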

Lay the groundwork

  • Review logic models, and update if needed.
  • Create an inventory of data currently collected by each program and flag which data are collected to satisfy funding and compliance requirements. Document characteristics about the data (a sketch of one inventory entry appears after this list), including the following:
    • Who collects each dataset or indicator?
    • What are the sources of the data? (Where does the data come from?)
    • How often are the data collected, and when was the most recent time the data were collected? 
    • Who enters each dataset or indicator?
    • Where are the data stored?
    • Which data are sensitive or confidential? Is use or disclosure of the data restricted by local, federal, or organizational requirements?
  • Identify the gaps between the data you collect and the data you need to operationalize your logic models. Are there components of your logic models for which you don’t collect data?
  • Identify unnecessary data collection. Are there datasets that you routinely collect out of habit but don’t analyze or use? If so, cut indicators as necessary.
  • Map out existing reporting methods (e.g., funder reports, internal quarterly or annual reports to program managers, senior leadership, or board members). Examine how useful these reports are and how easy they are to generate and to understand.
  • Examine indicators used by peer organizations and in the field. Take advantage of resources that offer commonly tested indicators such as the Outcome Indicators Project and Success Measures.
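
A data inventory often lives in a spreadsheet; as one illustration, here is a minimal Python sketch of a single inventory entry covering the characteristics listed above. The field names, dataset, and restriction shown are hypothetical examples, not a required schema.

  # Hypothetical inventory entry; adapt field names to your own systems.
  data_inventory = [
      {
          "dataset": "Tutoring attendance log",
          "collected_by": "Program coordinators",
          "source": "Sign-in sheets at each session",
          "frequency": "Weekly",
          "last_collected": "2024-03-31",
          "entered_by": "Data entry volunteer",
          "stored_in": "Case management system",
          "sensitive": True,  # contains client names
          "restrictions": "Internal confidentiality policy",
          "required_for_funding": True,  # flags compliance-driven collection
      },
  ]

  # List datasets collected to satisfy funding or compliance requirements.
  funder_driven = [entry["dataset"] for entry in data_inventory if entry["required_for_funding"]]
  print("Collected for funding or compliance:", funder_driven)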

Develop

  • Develop at least one indicator for each component of each logic model: inputs, activities, outputs, and outcomes. Be sure to distinguish between output and outcome indicators, and refine your logic models according to insights gained while developing indicators.
  • Create indicators that account for ease of access to or availability of data sources, cost to obtain data, time frame, and reporting lags. A single outcome often has multiple possible indicators, and you may need to settle for a proxy indicator that best represents the outcome.
  • Define the time period, denominator, population, source, and data collection method for each indicator (a sketch of an indicator record appears after this list).
  • Identify audiences for each indicator (that is, staff who need or would be helped by seeing indicator data), and use this information to determine your methods for reporting on indicators (e.g., dashboards, quarterly reports, annual reports, board meetings).
  • Describe indicators in relation to data collection processes for the program. Try to use the same language that programs use to describe indicators.
  • Add new indicators to your data inventory, taking care to identify which indicators satisfy contracting requirements.
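
To keep indicator definitions consistent across programs, it can help to record them in a fixed structure. Below is a minimal Python sketch of such a record; the Indicator class, its fields, and the example values are hypothetical illustrations, not a standard.

  from dataclasses import dataclass

  # Hypothetical record capturing the definition fields described above.
  @dataclass
  class Indicator:
      name: str
      logic_model_component: str  # input, activity, output, or outcome
      numerator: str
      denominator: str
      population: str
      time_period: str
      source: str
      collection_method: str
      audiences: list[str]  # who needs or benefits from seeing this indicator

  graduation_rate = Indicator(
      name="Program graduation rate",
      logic_model_component="outcome",
      numerator="Participants completing all program requirements",
      denominator="Participants enrolled at the start of the cohort",
      population="Adults enrolled in the job-training cohort",
      time_period="Annual (program year)",
      source="Case management system",
      collection_method="Exported enrollment and completion records",
      audiences=["Program manager", "Senior leadership", "Board"],
  )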

Vet

  • Quality-check your indicators. The SMART criteria (specific, measurable, attainable, relevant, and time-bound) can be helpful here.
  • Collect feedback on your indicators from relevant program staff and leadership, especially staff with data entry, analysis, and reporting responsibilities. Discuss with staff the potential burden and benefit of additional data collection. Ensure staff understand how data collection for their program connects to broader programmatic and organizational goals and outcomes.
  • Cross-check your desired indicators against your reporting requirements to ensure you are collecting all the data you need, and identify areas of overlap between your performance measurement needs and reporting requirements. Review reporting requirements with your development team as needed.

Use and share

  • Generate regular reports with relevant indicators for each audience; develop high-level messages and questions for discussion.
  • Share reports and facilitate discussion about indicators during regular meetings with staff, leadership, and board members.
  • Use indicator data to support program planning processes.

Review

  • Revisit your indicators regularly to make sure they are measuring what they are intended to measure. Ensure you have the capacity to measure all your indicators, and cut, change, or refine your indicators as your measurement priorities change.
  • Update what you collect based on your logic models and feedback from staff and leadership. If data are not getting used or are unreliable, consider changes or stop collecting them. In other words, data collection is not set in stone.

 

Target Setting

Once monitoring and evaluation staff have developed indicators, others in the organization can join in setting targets for program success. Target setting allows organizations to compare goals to reality, whether a target is for quarterly enrollment or annual program graduates. Tiering targets into categories such as “safe,” “stretch,” and “ideal” provides a range that spans more attainable and loftier goals. Writing a narrative that explains how targets were developed and lists the underlying assumptions will help later, when you compare targets to actual performance.

Lay the groundwork

  • Review logic models and performance indicators and update if needed. Ensure that your indicators account for every element of your logic model.
  • Check whether your organization has any experience setting targets. How were previous targets set? Do you have any current targets, whether established in grant applications or set based on other criteria?
  • Introduce staff and leadership to the importance of target setting for continuous improvement cycles.
  • Assess when target setting can and should happen in your organization’s annual timeline relative to budgeting and fundraising cycles.
  • Consider the feasibility of achieving targets with an eye to available funding and the effort required each time period to achieve targets.
  • Consider the factors that could affect your results, such as economic and demographic trends; community conditions; legislative or regulatory changes; and new policies, procedures, or data systems.

Develop

  • Set targets for input, output, and outcome indicators.
    • Review current and historical data on your program or population. Assess any bias in historical data (see the Racial Equity Approaches section).
    • Review evidence from the field about targets for specific indicators.
    • Estimate future program participation.
    • Estimate results of the program. Program results may initially improve over time but may level off or even decline.
    • Set targets collaboratively with program staff, leadership, and development.
  • Establish an appropriate time frame for each organizational target (e.g., quarterly, semiannual, annual, biennial).
  • Consider using target ranges, especially if evidence is not yet well established. For example, you could develop safe, stretch, and ideal targets as follows (a sketch comparing a result against such a range appears after this list):
    • Safe: We’re 90 percent sure that we can achieve this target.
    • Stretch: If things go well with current resources, this is achievable.
    • Ideal: If we had everything we needed, this is where we could be.
  • Write clear and concise narratives explaining target-setting evidence and assumptions.
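
As an illustration of a tiered target range, here is a minimal Python sketch that compares an actual result against safe, stretch, and ideal tiers; the indicator name and all figures are hypothetical.

  # Tiered targets and the actual result below are hypothetical figures
  # for a single indicator (quarterly enrollment).
  targets = {"quarterly_enrollment": {"safe": 40, "stretch": 55, "ideal": 70}}
  actuals = {"quarterly_enrollment": 52}

  for indicator, tiers in targets.items():
      result = actuals[indicator]
      if result >= tiers["ideal"]:
          tier = "ideal"
      elif result >= tiers["stretch"]:
          tier = "stretch"
      elif result >= tiers["safe"]:
          tier = "safe"
      else:
          tier = "below the safe target"
      print(f"{indicator}: actual {result} vs. targets {tiers} -> {tier}")

Pairing this kind of comparison with the written narrative described above makes it easier to explain later why a result landed where it did.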

Vet

  • Review targets with program staff, leadership, development, clients, the board, and others, and revise as needed.

Use and share

  • Include targets along with indicators in regular reporting processes.
  • Make clear who is responsible for each target. Responsibility will likely be shared across multiple teams and individuals.

Review

  • Work with program staff, clients, and leadership to understand why performance was different from targets and to develop a response strategy that will improve future performance. Use indicators and targets to drive annual program planning processes.
  • Remember that not achieving targets does not equate to “failure.” It’s an opportunity to reexamine assumptions and adjust. When possible, try to document reasons targets were not met.
  • Review prior evidence, assumptions, and performance. Update targets. Recognize changes in program capacity (e.g., additional staff, decreased funding).