Data Analysis and Communication

This section reviews best practices that monitoring and evaluation staff can use when analyzing data and communicating findings to demonstrate their organization’s impact.

In this section
Data Analysis and Storytelling
Dashboards and Data Visualization
From Performance Measurement to Evaluation


Data Analysis and Storytelling

In a data-driven organization, all staff should be comfortable reviewing and interpreting data. Monitoring and evaluation staff should guide leadership and program staff in engaging with data and facilitate opportunities for them to create their own stories from the data. Clients can also provide their own perspectives by creating stories from an organization’s data. Telling a good data story—especially one that others can understand and relate to—is critical for engaging stakeholders and using data for continuous improvement. Monitoring and evaluation staff should help people develop questions that are of interest to them, map out a process for acquiring and analyzing data to answer those questions, create appropriate stories from the data, and share these stories with others to get their input and reflections.

Lay the groundwork

  • Think about the following planning questions:
    • What data do you have that would be valuable for others to know about and use?
    • Who needs to understand and use these data? What is it that they need to know?
    • What experience does your audience have with data? What are the best ways for them to absorb quantitative information?
  • Think about the biases you may bring to your data analysis.
    • Are you assuming that a certain group will do better or worse?
    • How are you making decisions about how to analyze data?
    • Are you cutting the data in ways that lend themselves to a more or less favorable view of your program participants?

Develop

  • Invite a group of leaders, staff, or clients to review and explore the data with you in different ways. Encourage them to get comfortable with the data and create their own questions to explore.
  • Describe the different types of data stories that people can tell (e.g., interaction, comparison, change, factoid, and personal stories) and discuss how they can help engage people with the data in different ways.
  • Encourage the group to create their own data stories.
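
For example, the sketch below shows one way a monitoring and evaluation staff person might pull a simple “change” and “comparison” story out of a program data export using Python. The file name (participants.csv) and columns (cohort_year, employed_at_exit) are hypothetical placeholders; substitute the fields and outcomes your organization actually tracks.

```python
# A minimal sketch of building a "change" story from a hypothetical program
# data export. The file and column names below are placeholders.
import pandas as pd

df = pd.read_csv("participants.csv", usecols=["cohort_year", "employed_at_exit"])

# Change story: how has the employment-at-exit rate moved across cohorts?
change = (
    df.groupby("cohort_year")["employed_at_exit"]
      .mean()          # share of participants employed at exit, per cohort
      .mul(100)
      .round(1)
)
print(change)

# Comparison story: earliest cohort versus the most recent one.
first, last = change.index.min(), change.index.max()
print(f"Employment at exit: {change.loc[first]}% in {first} "
      f"vs. {change.loc[last]}% in {last}.")
```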

Vet

  • Interrogate your findings, and, if possible, have another staff member review your results or even analyze the data and describe their own findings as a check on your process.
  • Consider whether the stories come out clearly in your data.
    • Are these the right data for your story?
    • Are important data missing from this story?
    • Are you using too much or too little data to tell the story?
    • Can someone not familiar with these data understand the story you are trying to tell?
    • Is the story meaningful to the intended audience?
    • Can you make the story better by personalizing it? How can you connect with your audience? Can you use the journey of a single participant to help communicate findings?
    • How long does it take to get to the “point” of your story? Does it take too long, and will the audience lose interest?

Use and share

  • Present data stories to an audience, and get their feedback.
    • Do the stories make sense to them?
    • Which stories resonate most with your audience?
    • Does the audience have other story ideas?
  • If appropriate, consider using a data walk (see the Client Feedback Loops and Engagement section) as a presentation method to give your audience more opportunity to interact with the data and generate their own stories from them.
  • Document your analysis process and sources of data. Could you or another staff member duplicate the work you’ve done?
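
One way to make an analysis easy to duplicate is to keep each step in a short, commented script rather than in ad hoc spreadsheet edits. The sketch below assumes the same hypothetical participants.csv export used earlier; the file names, columns, and filter rules are placeholders for your own documented sources and decisions.

```python
# A documented, repeatable analysis: another staff member should be able to
# rerun this script end to end and get the same numbers.
# Data source (placeholder): participants.csv, exported from the case
# management system; record the export date and settings alongside the script.
import pandas as pd

SOURCE_FILE = "participants.csv"        # hypothetical export
REPORT_FILE = "employment_summary.csv"  # output shared with reviewers

df = pd.read_csv(SOURCE_FILE)

# Analysis decisions are written down as code, not made silently in a spreadsheet.
# 1. Keep only participants who have exited the program.
df = df[df["exit_date"].notna()]

# 2. Summarize the employment-at-exit rate by cohort year.
summary = (
    df.groupby("cohort_year")["employed_at_exit"]
      .agg(participants="count", employed_rate="mean")
      .reset_index()
)

summary.to_csv(REPORT_FILE, index=False)
print(summary)
```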

Review

  • Reflect on your audience’s reactions to the data. Do you need to change or refine your stories? Do you need to change the data or present it differently to better tell your story?


Dashboards and Data Visualization

Charts and other visuals of your organization’s data help both internal and external audiences better understand program outcomes, identify trends over time, and track progress. Dashboards within data systems can offer real-time updates on outputs and outcomes. Monitoring and evaluation staff should design visuals and dashboards that are informed by program logic models, indicators, and targets, but that are also accessible to the intended audiences. Data visuals should be a regular part of staff meetings, presentations, and reporting.

Lay the groundwork

  • Define the purpose of your data dashboard or visuals.
    • Are they providing operational information or data for strategic planning?
    • Will they be used to check progress or report on outcomes or impact?
    • Are they intended to initiate discussion on how programs might be changed or improved, or to reprioritize work?
    • How timely are the data you are including?
  • Specify the reporting time frame (e.g., monthly, quarterly, biannually, annually).
  • Identify the audiences (e.g., board, senior team, program directors or managers, program staff, clients).
  • Solicit ideas and feedback from staff on intended use cases for dashboards.

Develop

  • Build dashboard templates collaboratively with organizational stakeholders, and iteratively refine drafts.
  • Build in flexibility so other departments can adapt your templates and so the templates can present both high-level summaries and finer levels of detail.
  • Choose visualization techniques that most clearly and concisely convey the message you would like your audience to understand, such as the following:
    • Use colors and labels strategically and consistently to highlight ideas and match organization branding.
    • Focus on takeaways by highlighting the main story of the data. Use annotations and active titles to provide context and clear messaging.
    • Apply good data visualization practices to reduce clutter and integrate graphics with text (a brief charting sketch after this list illustrates several of these techniques).
    • Explore drill-down or filter functionality to facilitate inquiry and exploration.
  • Brainstorm ways to streamline reporting and tailor reports to the relevant audiences while meeting expected deadlines. Work with program, leadership, communications, and development staff, as well as your board, clients, and other stakeholders, to refine reports so they include the most relevant data and to improve their design and data visualizations.
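
As an illustration of a few of the techniques above (an active title, an annotation for context, and reduced clutter), the sketch below builds a simple bar chart with matplotlib. The program names, values, and brand color are invented; swap in your own indicators and organizational palette.

```python
# A minimal charting sketch: active title, one annotation, and less clutter.
# The data and brand color below are placeholders.
import matplotlib.pyplot as plt

programs = ["Housing", "Employment", "Youth"]
goal_met_pct = [62, 78, 55]      # hypothetical share of clients meeting goals
brand_color = "#2a6f97"          # stand-in for your organization's palette

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.bar(programs, goal_met_pct, color=brand_color)

# Active title: state the takeaway, not just the variable names.
ax.set_title("Employment clients are most likely to meet their goals", loc="left")
ax.set_ylabel("Clients meeting goals (%)")

# Annotation to give context for the main story.
ax.annotate("78% of employment clients\nmet their goals this quarter",
            xy=(1, 78), xytext=(1.35, 90),
            arrowprops=dict(arrowstyle="->"))

# Declutter: remove the box around the plot area.
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
ax.set_ylim(0, 100)

fig.tight_layout()
fig.savefig("program_goals.png", dpi=150)
```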

Vet

  • Encourage program staff to review draft dashboards, and assess how easy they are to read and understand.
  • Practice presenting visual information, and then assess whether additional caveats or context is needed to understand the data.

Use and share

  • Determine the best ways to communicate dashboards to staff, such as by email, at program meetings, or at all-staff meetings.
  • Tailor communication touch points to your audience.
    • Share data with program directors and senior leadership during your periodic meetings, then encourage directors to share with their teams or staff directly during weekly check-ins.
    • Ensure performance reports are always on board meeting agendas. Focus on key measures and high-level trends and priority areas.
    • Celebrate successes when you see improvements, and remember to celebrate goals that are consistently on target as well.
  • Ask questions to validate the data, such as the following:
    • Are you experiencing this in your work?
    • How can we respond to “priority areas” or “areas of growth” (i.e., measures for which targets are not being met)?
  • Build the dashboard review process into staff onboarding.

Review

  • Offer yourself (as a monitoring and evaluation staff person) as a resource for teams when they review dashboards. 


From Performance Measurement to Evaluation

Service organizations are often bombarded with information about performance measurement and evaluation, but without a framework for understanding how these activities can inform their work in different and complementary ways. The performance-measurement-to-evaluation continuum provides that framework. Selecting what you need—whether more robust performance measurement processes or a formative evaluation—depends heavily on the questions your organization wants to answer. Evaluation activities also require substantial planning and preparation, so organizational capacity helps determine whether your organization is ready to pursue an evaluation. In short, your organization should pursue goals it has the capacity to meet now and set aspirations for future evaluation activities.

Lay the groundwork

  • Learn about the differences between performance measurement and evaluation.
    • Performance measurement tells you what a program did and how well it did it. Performance measurement is ongoing, responsive, and adaptive; uses program and outcome data; and is mostly led by program staff. Performance measurement helps you answer the following categories of questions:
      • Inputs. What staff and volunteers are involved in the program? What is the budget?
      • Activities and participation. What services are delivered? How well are services being delivered?
      • Outputs. Who is participating? How many people are participating?
      • Outcomes. What changes in knowledge, attitudes, behaviors, or conditions do we observe in participants through our program?
    • Evaluation tells you if and how a program affected the people, families, or communities it is serving—that is, whether a program is producing results. Evaluation is a more discrete activity that answers predetermined questions, often involves other data collection and research methods, and is typically led by a third-party organization.
  • Learn about the different methods of evaluation, their costs and benefits, and the questions they can answer.
    • Formative evaluation is a set of research activities intended to provide information about how a program is being designed or carried out, with the objective of improving implementation and results.
      • A planning study is a type of formative evaluation that takes place during the design or planning phase to help programs clarify plans and make improvements at an early stage.
      • An implementation study is a type of formative evaluation that takes place while a program or initiative is being rolled out or is in progress. It is designed to answer questions that will help improve service delivery and results.
    • Summative evaluation is a study that assesses a program’s effectiveness in achieving results, based on the program’s logic model or theory of change. Depending on the method used, a summative evaluation may determine the program’s impact on specific outcomes. It can also tell you whether your program is working as intended or making things worse. Randomized controlled trials are commonly used to conduct summative evaluations, but other methods (such as matching techniques, difference-in-differences methods, and regression discontinuity designs) might be more appropriate for your population or context.
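
To make these methods a bit more concrete, here is a minimal difference-in-differences sketch using statsmodels. The outcomes.csv file and its columns (outcome, treated, post) are hypothetical, and a real summative evaluation would have an evaluator check the design’s assumptions (such as parallel trends) before trusting the estimate.

```python
# A minimal difference-in-differences sketch with hypothetical data.
# Assumed columns: outcome (numeric), treated (1 = program group, 0 = comparison),
# post (1 = after the program started, 0 = before).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("outcomes.csv")  # placeholder export

# The coefficient on treated:post is the difference-in-differences estimate
# of the program's effect, under the parallel-trends assumption.
model = smf.ols("outcome ~ treated + post + treated:post", data=df).fit()
print(model.summary())
print("Estimated program effect:", round(model.params["treated:post"], 3))
```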

Develop

  • Continue to strengthen your performance measurement processes. Strong performance measurement practices lay the foundation for strong organizational learning and evaluation.
    • Root performance measurement activities in your logic model and theory of change.
    • Build performance measurement activities into your program’s or organization’s routine.
    • Involve all relevant staff in performance measurement activities.
  • Consider what types of evaluation would benefit your organization and the questions you want an evaluation to answer. Reflect upon your organization’s readiness for an evaluation.
    • Who is the audience for the evaluation?
    • What types of questions do you want to ask about your program?
    • Who would lead the evaluation?
  • Conduct formative evaluations at key points in a program’s development—such as during the program design phase (i.e., planning study) and at start-up or expansion (i.e., implementation study)—and at periodic intervals to assess how a program is working.
  • Reflect on these additional questions if you are considering a summative evaluation (or impact study).
    • Do you have a formal design or model in place? Is the design or model sound and stable?
    • Are you serving the intended population?
    • Do you have the resources needed to succeed?
    • Are you implementing the program or initiative as designed?
    • Can you produce data for an evaluation?
    • Will an evaluation yield a meaningful result for your program or initiative, or for the field?
    • If the evaluation yields null, neutral, or negative results, is your organization prepared for the implications?

Vet

  • Include program staff, senior leaders, and board members in the decision to pursue an evaluation.

Use and share

  • Publish reports regularly with performance data for internal and external stakeholders.
  • Publish evaluation reports online.

Review

  • Facilitate opportunities for clients, staff, board members, and funders to reflect on performance data or evaluation results. Use this feedback, along with the results from evaluations, to shape how you design programs and processes in the future.