Data Collection

This section highlights best practices and methods nonprofits can use to collect data and feedback to improve programming.

In this section
Qualitative Data
Surveys
Administrative and Secondary Data
Client Feedback Loops and Engagement


Qualitative Data

While quantitative data can tell you a lot about your programs, they don’t tell you the whole story. Qualitative methods allow you to investigate deeper, more nuanced questions about your staff, clients, programs, and organization. Monitoring and evaluation staff should be just as fluent in qualitative approaches as they are in quantitative methods. These methods include semistructured interviews, focus groups, and observations, as well as mixed qualitative and quantitative approaches.

Lay the groundwork

  • Understand when using qualitative methods is appropriate.
    • To provide context and insight into complexities that other data can’t offer. 
    • To answer “what,” “how,” and “why” questions such as these: How did program implementation vary from the original plan? What do clients like best about the services they receive? Why do families decide to participate, or not participate, in a program? What are the barriers to participation? Which families tend not to participate in a program and why?
    • To compare data, supporting a study of similarities and differences within and across subjects and helping to answer questions such as these: Do elderly renters report experiences and perceptions similar to or different from those of young adult renters?
  • Investigate what makes good qualitative research.
    • Credibility/authenticity (i.e., internal validity)
    • Transferability/fittingness (i.e., external validity)
    • Dependability/auditability (i.e., reliability)
    • Confirmability (i.e., objectivity)

Develop

  • Write clear research questions that can be answered with qualitative data and whose answers will provide valuable insight for serving clients.
  • Decide what method of data collection works best for your team and your clients, and consider the pros and cons (e.g., time and staff needed, access to participants).
    • Interviews use open, semistructured, or structured guides that focus on those experiences and perspectives of most interest given your purpose; guides are tailored to types of respondents (e.g., adult clients, youth participants).
    • Focus groups use semistructured or structured discussion guides organized around a small set of topics or themes; groups range in size and could include 10 to 15 people of similar background.
    • Observations describe a setting, context, process, or behaviors of interest to a study; use a guide to focus the observations.
    • Other methods can include photovoice (photos or videos to gather input from study subjects) and content analysis of materials such as reports, outreach materials, and brochures.
  • Work in teams, take thorough notes, and consider recording interviews, focus groups, or observations when possible.
  • Think about how to sample clients, including how many, which populations, and how they will be selected.
    • Purposive and quota (nonprobability) sampling selects participants based on population characteristics and study objectives; it is also known as judgmental, selective, or subjective sampling. If you run multiple programs, make sure you get a mix of clients across programs or services received.
    • Snowball sampling encourages research participants to recruit other participants. It is used when potential participants are hard to find.
    • Inclusion sampling ensures that multiple voices are included in the sample.
  • Consider offering incentives to improve participation and express that you value clients’ time and effort.
    • If you offer study participants an incentive, it should be of a kind and amount that compensates them for their time without coercing participation.
    • Providing a meal or snacks is another way to recognize participants.
    • Offering child care may increase participation among people with child care responsibilities.
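The sampling approaches above can be sketched in a few lines of code. Below is a minimal, hypothetical example of quota sampling that draws a fixed number of clients from each program so the sample reflects a mix of services received; the client records and the "program" field are assumptions for illustration, not part of any real system.

```python
import random

# Hypothetical client roster; in practice this would come from your
# case management system. The "program" field is an assumed example.
clients = [
    {"id": 1, "program": "housing"},
    {"id": 2, "program": "housing"},
    {"id": 3, "program": "job_training"},
    {"id": 4, "program": "job_training"},
    {"id": 5, "program": "food_assistance"},
    {"id": 6, "program": "food_assistance"},
]

def quota_sample(clients, per_program, seed=0):
    """Draw up to `per_program` clients from each program so the
    sample includes a mix of services received (quota sampling)."""
    rng = random.Random(seed)  # fixed seed keeps the draw reproducible
    by_program = {}
    for c in clients:
        by_program.setdefault(c["program"], []).append(c)
    sample = []
    for group in by_program.values():
        k = min(per_program, len(group))
        sample.extend(rng.sample(group, k))
    return sample

sample = quota_sample(clients, per_program=1)
print(sorted(c["program"] for c in sample))
```

A purposive variant would replace the random draw with an explicit filter on the characteristics of interest (e.g., length of enrollment); snowball sampling happens through recruitment rather than code.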

Vet

  • Review your data collection protocols and analysis plan.
    • Will respondents understand your data collection questions? 
    • In your data collection protocols, do you refer to clients in ways they’re familiar with or like? For example, do you call them “clients” or “customers”?
    • Could your data collection process or methods harm any participants? Consider the benefits of having an external entity (such as an institutional review board) review your proposed methods and make recommendations to eliminate or minimize the possibility of harm.
    • Are you bringing bias to your facilitation or data analysis? Are you assuming that a certain group will have certain reflections or experiences? How are you making decisions about how to analyze data? Are you using quotes that will give a more or a less favorable view of your program participants or that will reinforce stereotypes? 
    • Have you developed a plan for analyzing your data? Consider your research questions and how you will use the data you collect to answer those questions.

Use and share

  • Implement your data collection plan.
  • Analyze your data.
    • After coding transcripts for themes, ask another staff member who attended or took notes to code a small subset of the transcripts and assess whether you both found similar themes.
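One simple way to assess how well the second coder's themes match your own is percent agreement across every (transcript, theme) decision. The sketch below is a minimal illustration; the transcript IDs and theme labels are hypothetical.

```python
# Each coder's output: transcript id -> set of themes assigned.
# Theme labels here are hypothetical examples.
coder_a = {
    "t1": {"barriers", "trust"},
    "t2": {"cost"},
    "t3": {"trust", "cost"},
}
coder_b = {
    "t1": {"barriers"},
    "t2": {"cost"},
    "t3": {"trust", "cost"},
}

def percent_agreement(a, b, themes):
    """Share of (transcript, theme) decisions on which both coders
    agree (theme present in both, or absent in both)."""
    agree = total = 0
    for tid in a:
        for theme in themes:
            total += 1
            if (theme in a[tid]) == (theme in b[tid]):
                agree += 1
    return agree / total

themes = {"barriers", "trust", "cost"}
print(round(percent_agreement(coder_a, coder_b, themes), 2))
```

Percent agreement is the simplest check; chance-corrected statistics such as Cohen's kappa are stricter if you want a formal reliability measure.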

Review

  • Share findings with participants and ask for feedback to ensure your interpretations hold up under scrutiny.

 

Surveys

Surveys are a powerful method for collecting client and program data beyond what is gathered routinely but can also be difficult and expensive to implement well. When creating a survey, monitoring and evaluation staff should begin by defining what the organization needs to know and who the target respondents should be. Monitoring and evaluation staff should workshop survey questions with program staff and have a survey design expert review questionnaires to make sure the data collected will provide accurate and relevant information. Before your organization launches the survey, have program staff fill out sample surveys and conduct a pretest with target respondents to catch any lingering issues. Train staff on proper survey administration and closely monitor the early survey completions. Also consider applying survey design principles to your intake forms and other client-facing data collection tools.

Lay the groundwork

  • Understand the different data collection methods (surveys, semistructured interviews, in-depth cognitive interviews, focus groups, program or administrative data, and secondary data) and their appropriate uses.
  • Review logic models, indicators, and your data inventory to identify information gaps that a survey could address.
  • Review how to integrate racial equity approaches in data collection tools.

Develop

  • Define what you want to know. Write out specific questions that survey data should be able to answer. Ensure the information you seek is reflected in (and justified by) your logic model and indicators framework.
  • Identify who needs to respond to the survey—that is, your target population. Consider these questions:
    • How will you recruit survey respondents?
    • Are the members of this population likely to respond to a survey?
    • How will you follow up with nonrespondents or incentivize them to participate?
    • What is the minimum number of surveys you need to collect?
  • Consider the trade-offs of different survey methods (e.g., telephone, mail-in, face-to-face, online, text/mobile device), and select the best one for your target population.
  • Write the survey, and then workshop your survey questions. Do similar, validated survey instruments already exist?
  • Consider the trade-offs with question design. Will this type of question collect the information you’re after? Are you looking to collect quantitative or qualitative data? Do you want open-ended or closed response options?
  • Decide whether to offer respondents incentives and, if so, determine the amount, the structure (e.g., prize drawing or guaranteed payment), and the method of delivery.
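For the "minimum number of surveys" question above, a standard starting point is the sample-size formula for estimating a proportion, with a finite population correction for small client populations. The sketch below assumes a hypothetical population of 500 clients and a ±5 percentage-point margin of error at roughly 95 percent confidence.

```python
import math

def min_sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    """Minimum completed surveys to estimate a proportion within the
    given margin of error at ~95% confidence (z = 1.96), using the
    most conservative assumption p = 0.5, then applying a finite
    population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)  # finite population correction
    return math.ceil(n)

# Hypothetical example: 500 clients, +/-5 percentage points.
print(min_sample_size(500))  # -> 218
```

Remember this is the number of completed surveys, so divide by your expected response rate to estimate how many clients to contact.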

Vet

  • Consult with program staff, grant managers, executives, and development staff as necessary.
  • Consider a few things before you decide to move forward with a survey:
    • How might you write questions on your intake forms, surveys, and other data collection tools to prevent reinforcement of negative perceptions of clients?
    • How can you reword questions to be more respectful of clients?
    • Are you asking clients to divulge sensitive information? Do you really need that information? Or will it ultimately go unused?
    • Does the literacy or numeracy level of your assessment need to be adjusted to increase accessibility?
  • Review your questions and answer categories to ensure they don’t commit the following common survey-design errors:
    • double-barreled questions (asking about more than one thing)
    • categories that are not mutually exclusive or exhaustive
    • unclear time frames for recall questions
    • wording subject to different interpretations
    • unreasonable intervals
    • use of jargon or potentially offensive language
    • questions that ask about beliefs and behaviors interchangeably
  • Test your survey.
    • Have an expert review your survey.
    • Test the survey with nonexperts and members of your target population, replicating the survey procedure as well as the questions. Conduct cognitive interviews on question comprehension, response options, and navigation problems. Document how long it takes different groups to take the survey.
    • Test again if the last test required multiple changes.

Use and share

  • Translate your survey into appropriate languages, and confirm that the language choice is consistent with the reading levels of the target population. After translation, test the survey again!
  • Train staff as necessary to support survey administration.
  • Field your survey with clients according to the method you chose.
  • Track and share the progress of data collection.
  • Share survey results, if appropriate, with the target population.

Review

  • Review survey results for nonresponse, missing data, and the time it takes to complete the survey.
  • Discuss with staff what can be done to improve survey administration.
  • Discuss with survey respondents how the survey can be improved.

 

Administrative and Secondary Data

Most nonprofits collect information about the people their programs serve. Yet, knowing more about their clients—such as their current health, education, and employment circumstances; the other services they may interact with; and the places they live—can help nonprofits see how their services fit into a larger community context and what other challenges their clients may be facing. Both national and local sources of data can shed light on these areas. Monitoring and evaluation staff should be familiar with sources of secondary and administrative data and think about how they and others in the organization can use these data to understand how they can better support clients.

Lay the groundwork

  • Reflect upon the populations and communities you serve, and write questions to help inform or contextualize your work. How will the answers to these questions improve your services or change your approach?
  • Learn about national and local sources of administrative and secondary data relevant to your work with clients (e.g., Urban Institute’s Greater DC).
  • Look for examples of how these data sources or ones like them have been used by others.
  • Connect with development or advocacy staff to understand how they have described and researched community trends and conditions (e.g., in grant applications) and what questions they have not been able to answer.

Develop

  • Write clear research questions whose answers will provide valuable insights for serving clients and that can be answered with existing administrative and secondary data.
  • Search for existing research or analyses around your research question (e.g., local or federal government websites, local data intermediaries, research organizations).
  • Identify data sources necessary to answer your questions and the steps needed to access those data.
  • Decide whether to move forward with the research project. Will the answers to your questions help provide context or inform your work? Does your organization have the capacity and skills to take on this research?
  • Obtain data that are publicly available through open data portals or public sources. Consider recency, quality, and geographic area of interest (e.g., neighborhood, ward, county, state). Consider how you will link the administrative data to your program or population of interest.
  • Use the following tips to develop a compelling pitch for third-party data providers whose data are not publicly available:
    • Discover how granting your request could help the data provider fulfill its mission or solve a problem.
    • Understand the data provider’s history, priorities, and worries. Does it have reasons to say no that you should be aware of?
    • Ask colleagues or others to help you prepare for face-to-face discussions and sharpen your pitch.
    • Know your own strengths and weaknesses, and those of your organization.
    • Don’t make promises you can’t keep. Don’t misrepresent your goals or intentions.
    • Be prepared for negotiations to take time and to fail at first. Don’t become emotional or combative.
    • Consider partnering with other organizations on a collective request.
    • If the provider is unable or unwilling to share a full dataset, ask whether it can share a partial dataset (e.g., de-identified or aggregate data).
    • Prepare a data-sharing agreement as needed (see the Data Privacy section).
  • Understand your data. Review descriptive statistics tables (e.g., sums, high/low values, averages, medians, frequency counts) from your data and look for unexpected or extreme values. For example, how do your data sources compare with other data for similar issues or populations?
  • Analyze data and formulate answers to your initial research questions.
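The "understand your data" step above can be done with a few lines of code before any deeper analysis. A minimal sketch using the standard library; the field name and values are hypothetical, and the plausible-range cutoffs are assumptions you would set for your own data.

```python
import statistics

# Hypothetical client ages from an administrative dataset;
# 120 looks like a data entry error worth flagging.
ages = [34, 29, 41, 38, 35, 120, 33]

def describe(values):
    """Basic descriptive statistics to screen for unexpected values."""
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": round(statistics.mean(values), 1),
        "median": statistics.median(values),
    }

print(describe(ages))
# Flag values outside a plausible range (assumed here) for follow-up.
outliers = [v for v in ages if not (0 <= v <= 110)]
print(outliers)  # -> [120]
```

A large gap between the mean and median, or any value outside the plausible range, is a signal to check the source data before comparing it with other datasets.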

Vet

  • Share what you’ve learned with your colleagues to validate the utility of your data and analyses and to generate new ways to look at the information (e.g., to what extent data can be leveraged to support program design).
  • Share what you’ve learned with external experts for technical review.

Use and share

  • Share your findings as appropriate with stakeholders inside and outside of your organization.
  • Share what you learned from your experience working with administrative and secondary data both inside and outside your organization (as appropriate).

Review

  • Consider periodically reviewing and revising your analysis as new data become available.

 

Client Feedback Loops and Engagement

Funders and nonprofits are coming to a consensus that their client communities should be partners in designing their own service solutions. Client feedback loops and community-engaged methods include the input, participation, and reflections of the people and communities you serve. They provide organizations a clear pathway to hear client experiences. Client feedback loops and community-engaged methods look different at each organization, but every loop should involve four steps: (1) listening to clients, (2) reflecting on feedback data with clients, (3) acting upon that feedback by making changes, and (4) communicating to clients why those changes were or were not made. Client feedback loops can take many different forms, including feedback surveys, data walks, and client advisory boards. Client engagement in monitoring and evaluation can expand learning culture; redistribute power between clients and providers; and lead to action, change, and improvement.

Lay the groundwork

  • Before you start the process, define the scope: Why do you want to create the feedback loop?
    • Participation: Will all clients be part of the feedback process, or just clients from one particular program?
    • Engagement: How are clients involved in your data processes?
    • Methods: How does your organization involve clients?
  • Before initiating a feedback loop, assess your organization's capacity to support and sustain it, from staff readiness to the quality of existing relationships with clients. Are the relationships healthy? Do clients typically feel safe providing feedback?
  • Assess current feedback loops. What types of feedback do you currently collect from clients? How is this feedback used (if at all) to improve service delivery? What crucial feedback is missing? What are the relationships between clients and staff like? What barriers exist in client and staff relationships?
  • Recognize that staff priorities can be different from client priorities on areas of improvement. Do you feel like you understand client priorities, or do you need to collect additional data?
  • Focus on a specific part of your program or organization for which you have the time, capacity, or desire to make meaningful change based on client or staff feedback. Choose priorities that
    • can make a difference in clients’ lives;
    • are actionable;
    • cannot be assessed in other, less burdensome ways; and
    • are oriented toward change.

Develop

  • Define priorities for feedback collection and ensure these align with client priorities. 
    • Connect priorities to outcomes defined in your logic model and ensure your indicators framework includes feedback loops as a data collection method.
  • Examine the context.
    • Consider existing infrastructure before attempting to build new feedback loops.
    • Acknowledge and value community expertise.
  • Select feasible feedback collection methods.
    • Refer to the Surveys section for best practices. Surveys are a powerful tool for collecting individual client feedback.
    • Refer to the Qualitative Data section for best practices. For example, hold focus groups or semistructured interviews during which clients can provide feedback on programs or respond to specific prompts.
    • Consider conducting data walks to allow clients and service providers to jointly review data in small groups, interpret what the data mean, and collaborate to improve policies, programs, and other factors of community change. Data walks can be a powerful tool for nonprofits that want to communicate data to clients and collect feedback in structured conversation.
    • Consider forming a client advisory board. Client advisory boards are a collective group of community members and organization representatives who share information and make decisions to improve services. Client advisory boards are most likely not the first step in engaging clients—they take time, infrastructure, community trust building, and institutional buy-in to set up.
  • Set expectations.
    • Prepare community members for the task at hand.
    • Make expectations for work and partnership clear.
    • Clearly define decisionmaking capacity.
    • Compensate community members for their time and expertise.
  • Collect feedback from clients mindfully.
    • Create a safe space for clients to offer sensitive feedback.
    • Think about who is best to facilitate. Does that person directly work on the priority area of focus? Would clients feel comfortable discussing sensitive issues with them? For example, clients or staff may be more open talking to someone not immediately overseeing their work or program performance.
    • Set the tone—voluntary, inclusive, and without repercussions.

Vet

  • Review feedback plans with staff and leadership.
  • Test feedback collection methods with clients.
  • Assess results and make changes if needed.

Use and share

  • Communicate feedback to staff.
    • Create a safe space for staff to receive sensitive feedback.
    • Think about the time, place, format, and frequency of feedback that will be most effective for staff and most conducive to eliciting positive responses.
    • Set the tone—feedback is about improving the quality of service and results for clients, not about punishing staff.
  • Incorporate feedback into decisionmaking about your program or organization.
  • Communicate, publicize, and share information about changes to your program or organization. Determine how you want to communicate changes to clients, and provide clients with opportunities to be part of implementing changes.
    • Consider hosting a data walk at the end of each feedback loop. Data walks are an easy and effective way to report feedback data and program changes to clients and solicit their reflections on those data.

Review

  • Continue to build relationships with your clients, even when you’re not formally collecting data with them. Feedback loops are about building relationships, not collecting data.
  • Understand that creating client feedback loops and community engagement are iterative, not one and done. But don’t ask for more feedback until you’ve had a chance to act on initial feedback and close the loop.
  • Expect that the first changes you’ll see will be to your organizational culture and your relationships:
    • Include everyone—staff, volunteers, and clients—in the process.
    • Embed client feedback loops and community engagement within your organizational strategy.
  • Watch for changes in client feedback and outcomes to follow.