
Evaluation Guidebook for Projects Funded by STOP Formula Grants Under the Violence Against Women Act

Document date: December 01, 1997
Released online: December 01, 1997

This project was supported by Grant No. 95-WT-NX-0005 awarded by the National Institute of Justice, Office of Justice Programs, U.S. Department of Justice. Points of view in this document are those of the authors and do not necessarily represent the official position or policies of the U.S. Department of Justice or of other staff members, officers, trustees, advisory groups, or funders of the Urban Institute.

The nonpartisan Urban Institute publishes studies, reports, and books on timely topics worthy of public consideration. The views expressed are those of the authors and should not be attributed to the Urban Institute, its trustees, or its funders.

Note: This report is available in its entirety in the Portable Document Format (PDF).


PREFACE

This Evaluation Guidebook is intended as a resource for anyone interested in learning more about the success of programs that try to aid women victims of violence. It has been written especially for projects funded through STOP formula grants, but has wider application to any program addressing the needs of women victimized by sexual assault, domestic violence, or stalking.

The Violence Against Women Act (VAWA), Title IV of the Violent Crime Control and Law Enforcement Act of 1994 (P.L. 103-322), provides for Law Enforcement and Prosecution Grants to states under Chapter 2 of the Safe Streets Act. The grants have been designated the STOP (Services, Training, Officers, Prosecutors) grants by their federal administrator, the Department of Justice's Violence Against Women Grants Office (VAWGO) in the Office of Justice Programs (OJP). Their purpose is "to assist States, Indian tribal governments, and units of local government to develop and strengthen effective law enforcement and prosecution strategies to combat violent crimes against women, and to develop and strengthen victim services in cases involving violent crimes against women."

VAWA also places major emphasis on collaboration to create system change, and on reaching underserved populations. System change may result from developing or improving collaborative relationships between justice system agencies and private nonprofit victim service agencies. This emphasis on collaboration is key to the legislation's long-term ability to bring about needed system change. Historically, the relationships among these agencies in many states and communities have been distant or contentious, with little perceived common ground. VAWA is structured to bring these parties together, in the hope that they can craft new approaches that will ultimately reduce violence against women and the trauma it produces. In addition, many groups of women have never been reached by the services available to women victims of violence. To close these service gaps, VAWA encourages states to use STOP funds to address the needs of previously underserved victim populations, including racial, cultural, ethnic, language, and sexual orientation minorities, as well as rural communities. The Act requires assessment of the extent to which such communities of previously underserved women have been reached through services supported by STOP grants.

To accomplish VAWA goals, state coordinators in all 50 states, the District of Columbia, and five territories distribute STOP formula grant funds to subgrantees, which carry out specific projects. Subgrantees can be victim service agencies, law enforcement or prosecution agencies, or a wide variety of other agencies. They run projects addressing one or more of the STOP program's seven purpose areas:

  • Training for law enforcement and/or prosecution,
  • Special units for law enforcement and/or prosecution,
  • Policy and procedure development for law enforcement and/or prosecution,
  • Data and communication systems,
  • Victim services,
  • Stalking, and
  • Needs of Indian tribes.

This Guidebook is designed to help subgrantees document their accomplishments, and to offer assistance to state STOP coordinators as they try to support the evaluation activities of their subgrantees. Everyone should read the first four chapters, which introduce the reader to issues in doing evaluations and in working with evaluators:

  • Chapter 1 covers reasons to participate in evaluations, and participatory approaches to conducting evaluations.
  • Chapter 2 explains logic models, including why you want to have one and how to develop and use one.
  • Chapter 3 describes how to get what you want from an evaluation, including how to work with evaluators.
  • Chapter 4 discusses ways to use evaluation results to improve your program's functioning and performance, promote your program, and avoid being misunderstood.

The only evaluation activity that VAWGO requires of projects receiving STOP funds is completion of a Subgrant Award Report (SAR) at the beginning of the project and for each add-on of funding, and of Subgrant Statistical Summary (SSS) forms covering each calendar year during which the project operates. Everything else in this Guidebook is optional with respect to federal requirements, although state STOP coordinators may impose additional evaluation requirements of their own. Therefore, even if they read nothing else, all state STOP coordinators and STOP subgrantees should read Chapter 5, which explains the requirements of the SAR and SSS.

The remaining chapters of this Guidebook form a reference or resource section covering specialized topics. There is no need for everyone to read all of these chapters. Rather, you can go directly to the chapter(s) that provide measures relevant to your particular project. In addition, the Introduction to the Resource Chapters uses logic models to demonstrate how programs of different types might need or want to draw upon the evaluation resources from one or more of the resource chapters:

  • Chapter 6 is a technical discussion of evaluation design, and would be relevant to someone who must actually design an evaluation or who intends to work closely with an evaluator to select an appropriate design.
  • Chapter 7 addresses short- and long-term victim outcomes, and would be relevant to any project that provides direct services to victims.
  • Chapter 8 offers a variety of dimensions that could be used to describe current status and changes in any of the following: one victim service, a "full-service" victim support and advocacy agency, and/or the network of victim services available in a locality.
  • Chapter 9 addresses system changes/outcomes for the criminal and civil justice systems, and would be relevant to projects focused on policies, procedures, protocols, special units, or other types of system change.
  • Chapter 10 describes potential outcomes and measures to assess community-wide system change, and would be relevant to projects attempting community coordination or collaboration activities.
  • Chapter 11 focuses on ways to measure improvements in community attitudes toward violence against women, and would be relevant to any project with a community education component or that wants to assess changed community attitudes.
  • Chapter 12 presents some ways to think about measuring increased perceptions of fairness and justice in the legal and court systems, for those projects interested in seeing whether any changed practices or changed system relationships have made enough difference that victims now see the systems as operating more fairly than they did before.
  • Chapter 13 provides measures for the impacts of training activities, to be used by those projects with a significant training component.
  • Chapter 14 focuses on measures of impact for projects that are developing or installing new data or communication systems.
  • Chapter 15 discusses special issues regarding evaluations of projects on Indian tribal lands.

CHAPTER 1
GETTING STARTED: THINKING ABOUT EVALUATION

This chapter lays the groundwork for thinking about evaluation. Many readers of this Guidebook will never have participated in an evaluation before. Even more will never have been in charge of running an evaluation and making the decisions about what to look at, how to do it, and how to describe the results. Some readers may have had negative experiences with evaluations or evaluators, and may be nervous about getting involved again. We have tried to write this Guidebook in a way that makes evaluation clear and helps all readers gain the benefits of evaluations that are appropriate to their programs.

Rigorous evaluation of projects supported with STOP formula grant funds is vitally important. Evaluation can:

  • Document what your project accomplishes,
  • Provide evidence of your project's impact and effectiveness in reaching its goals,
  • Describe what kinds of participants benefit the most (and least) from project activities,
  • Generate information on what strategies work best, how projects should be structured, and how to overcome obstacles, and
  • Document project costs and, in some studies, assess the value of benefits.

You can use this information to:

  • Determine if your project is accomplishing your objectives, for whom, and how,
  • Plan and manage your project by getting feedback to identify areas that are operating according to plan and those that need attention and development,
  • Identify unmet needs and gaps in services for those you are trying to reach,
  • Publicize your accomplishments, and
  • Raise funds for project continuation, expansion or replication.

The kind of information you will get, and what you can do with it, depends on the kind of evaluation you select. You need to start by asking what you hope to learn and how you plan to use the findings. Answer these questions: Who is the evaluation audience? What do they need to know? When do they need to know it?

You can choose from the following types of evaluation:

  • Impact evaluation focuses on questions of causality. Did your project have its intended effects? If so, who was helped and what activities or characteristics of the project created the impact? Did your project have any unintended consequences, positive or negative?
  • Process evaluation answers questions about how the project operates and documents the procedures and activities undertaken in service delivery. Such evaluations help to identify problems faced in delivering services and strategies for overcoming these problems. They can tell you if your project is doing what you want it to do, in the way you want to do it. They can provide guidance for practitioners and service providers interested in replicating or adapting your strategies in their own projects.
  • Performance monitoring provides regular, consistent data on key project activities and accomplishments. Indicators of performance obtained through routine monitoring have several uses. You can use them in a process evaluation to document the activities of component parts of service delivery. You can also use these results in conjunction with project management, to identify areas in which performance expectations are not being attained. Finally, these indicators can be used as part of an impact evaluation to document project accomplishments and help you raise funds for your program.

A comprehensive evaluation will include all of these activities. Sometimes, however, the questions raised, the target audience for findings, or the available resources limit the evaluation focus to one or two of these activities. Any of these evaluations can include estimation of how much the project or project components cost. Impact evaluations can include assessments of how the costs compare to the value of benefits (cost-benefit analysis) or the efficiency with which alternative projects achieve impacts (cost-effectiveness analysis).

WHAT SHOULD BE EVALUATED?

State STOP coordinators may want to develop statewide evaluations that cover many or all of the projects they fund with STOP dollars. Alternatively, state STOP coordinators could decide that the Subgrant Award Reports and Subgrant Statistical Summaries supply adequate basic information to describe their portfolio of subgrants, and concentrate special evaluation dollars on particular issues. To see longer-term program effects on women, they might decide to fund a follow-up study of women served by particular types of agencies. Or they might decide that they have funded some particularly innovative programs for reaching underserved populations, and want to know more about how well known and well respected these programs have become within their target communities.

In addition, regardless of the evaluation activities of state STOP coordinators, STOP subgrantees might want to learn more about the effects of their own particular program efforts. They might devote some of their own resources to this effort, and might have a need to work with an evaluator in the process.

This Guidebook is written to help both state STOP coordinators and STOP subgrantees as they develop plans to evaluate STOP projects. It contains information that can be applied statewide, to a subgroup of similar projects throughout the state or one of its regions, or to a single project.

One issue that may arise for anyone trying to use this Guidebook is, Am I trying to isolate and evaluate the effects of only those activities supported by STOP funds? Or, am I trying to understand whether a particular type of activity (e.g., counseling, court advocacy, special police or prosecution units, providing accompaniment to the hospital for rape victims) produces desired outcomes regardless of who pays for how much of the activity?

We strongly advocate taking the latter approach: evaluate the effectiveness of the activity, regardless of who pays for it. Your evaluations of project results for outcomes such as victim well-being, continued fear, children's school outcomes, or the gradual disappearance of sexual assault victims' nightmares and flashbacks need to look at the program as a whole. Unless STOP funds pay for the whole of an entirely new and separate activity, it is virtually impossible to sort out what STOP funds accomplished versus what other funds accomplished. Instead, we suggest using your evaluation effort to document that the type of activity has good effects. Then you can justify spending any type of money for it, and can say that STOP funds have been used to good purpose when they support this type of activity. Only when your evaluation is focused on simple performance indicators is it reasonable to attribute a portion of the project achievements (e.g., number of victims served, officers trained) to STOP versus other funding sources.

A different issue you may be facing is that your program is already under way, so you cannot create the perfect evaluation design that assumes you had all your plans in place before the program started. We discuss options for handling this situation in Chapter 2.

A final issue for those thinking about evaluation is, Is the program ready for impact evaluation? Not all programs are ready. Does that mean you do nothing? Of course not. Process evaluation is a vital and extremely useful activity for any program to undertake. We discuss this issue in more detail in Chapter 2.

WORKING TOGETHER: PARTICIPATORY EVALUATION

In Chapter 3 we discuss many issues that arise when working with evaluators. However, because we think attitude is so important in evaluation, we include here in Chapter 1 a few comments on participatory evaluation.

Some evaluations are imposed from above. Funders usually require programs to report certain data to justify future funding. Funders may also hire an outside evaluator and require funded programs to cooperate with the activities of this evaluator. In many cases, the programs to be evaluated are not asked what they think is important about their programs, or what they think would be fair and appropriate measures of their own programs' performance.

We think it is important for the entire enterprise of evaluation that those being evaluated have a significant share in the decision-making about what should be evaluated, when, how it should be measured, and how the results should be interpreted. We urge state STOP coordinators and STOP subgrantees to work together to develop appropriate evaluation techniques, measures, and timetables for the types of activities that STOP funds are being used to support.

A participatory approach usually creates converts to evaluation. In contrast, an approach "imposed from above" is more likely than not to frighten those being evaluated, and make them wary of evaluation and less inclined to see how it can help them improve their own program and increase its financial support.

  • Identify goals. The mutual design work needs to start with a discussion of what the goals of the various subgrants are. Some of these goals may be short-term, and some may be long-term. They will certainly be different for different types of projects.
  • Goals should be realistic. Do not give yourselves the goal of "eliminating violence against women" or "our clients will be violence-free in six months"; the first will probably take a few centuries, and experience indicates that even the second is a long process. On the other hand, there is a lot of room for improvement in how justice agencies treat women victims of violence; a significant shift toward better treatment is a realistic and measurable short-term goal. So are "our clients will understand their options for action" and "our clients will leave with a better support system in place."
  • Identify data users and what they need. At the same time, discussion needs to focus on who will use the data collected and the evaluation results, and how to ensure that the data to be collected will be useful to those who must gather them (the program people, usually), as well as to funders. This issue is equal in importance to agreement on goals. A good way to ensure that evaluations will fail is to design them so the data are not useful to the program people.
  • Details. Once goals and uses of the data are agreed upon, state coordinators, subgrantees, and possibly outside evaluators can get down to the details. These include:
      • How the goals should be measured,
      • Who should actually collect the data and how,
      • Who should analyze and interpret the data,
      • Who should review the analysis and interpretation for accuracy and reasonableness, and
      • Who should have access to the data for other analyses that might serve each program differently, and how that access will work.

Including all of the interested parties in designing evaluations helps to avoid the selection of inappropriate measures and promote the selection of measures that both reflect program accomplishments and help guide program practice.

RESPECT FOR WOMEN'S PRIVACY AND SAFETY

Just as program directors and staff want their needs taken into consideration during the design of an evaluation, so too must we think about the needs of the women from whom a good deal of the data for evaluations must come. Everyone who has worked with women victims of violence knows the problems they face in having to repeat their story many times, the concerns they may have about how private information will be used and whether things they say will be held in confidence, and the problems that efforts to contact women may raise for their safety and well-being. No one wants to conduct an evaluation that will make a woman's life harder or potentially jeopardize her safety. We discuss these issues at greater length in Chapter 6, which also describes some approaches you can use to establish permission to conduct follow-up interviews and to keep track of women between follow-up interviews. We think it is important to note here that including some women who have experienced violence in your evaluation design and data collection planning will give you the opportunity to check your plans against what they consider feasible and safe.

HOW TO USE THIS GUIDEBOOK

Most people will not want or need to read this whole Guidebook, and there certainly is no need to read it all at one sitting—no one can handle more than 200 pages of evaluation techniques and measures. Chapters 1 through 4 contain the "basics" that everyone will want to read, and that's only about 25 pages. After that, you can pick and choose your chapters depending on your evaluation needs. Some will be relevant to you early in your evaluation work, while others may be more useful somewhat later on.

Chapter 5. Since all state STOP coordinators and all STOP subgrantees must use the SAR and SSS to report activities to VAWGO, each of you should read Chapter 5 when your subgrant begins (or now, if you have already started). It will help you plan how you are going to record the data you will need for these reports.

Chapter 6. This chapter gets into the details of evaluation design: how to choose the right level of evaluation and the specific research design for your circumstances, and how to protect women's privacy and safety while conducting evaluations. Everyone doing any type of evaluation should read the sections on choosing the right level of evaluation for your program and on protecting privacy and safety. The section on choosing a specific research design contains more technical language than most of the rest of this Guidebook, and should be used by those who will be getting down to the nitty-gritty of design details, either on their own or with an evaluator. You may also want to use that section alongside a standard evaluation text, several of which are listed in the chapter's addendum.

Chapters 7 through 15. Each of these chapters takes a different purpose or goal that could be part of a STOP subgrant, and focuses on the measures that would be appropriate to use in identifying its immediate, short-, and/or long-term impact. Chapters 7 through 12 focus on particular types of outcomes you might want to measure (victim, system, community, attitudes). Chapters 13 and 14 address measurement options for two complex types of projects—training and data system development. Chapter 15 addresses special issues involved in doing evaluations on Indian tribal lands.

If you are doing a training project, read Chapter 13 and, if you want ultimate justice system impacts, also read Chapter 9. If you are doing a victim services project or otherwise hoping to make a difference in victims' lives, read the victim outcomes chapter (Chapter 7) and the victim services chapter (Chapter 8). If your project involves developing and implementing a new data system, read Chapter 14. If you are concerned with changing perceptions of justice within the system as other parts of the system change, read Chapter 12. Use these chapters as best suits your purpose. There is no need to commit them all to memory if they cover something that is not relevant to your project and its goals.
