A Guide to Implementation Research

Chapter One
Introduction to Implementation Research

Public policymakers and program managers are responsible for effectively and efficiently using community resources to promote social goals. Evaluation research provides information to support the decisions they make. This book explores how one type of evaluation research—implementation research—can assist those designing and operating social programs.

What Is Implementation Research?

In this book, "implementation research" is used as a general term for research that focuses on the question "What is happening?" in the design, implementation, administration, operation, services, and outcomes of social programs.1 In the field of evaluation research, implementation studies are sometimes contrasted with impact studies, which measure the difference between "what is happening" and "what would have happened" in the program's absence. But although implementation studies do not estimate the impacts programs have on clients and other stakeholders, they do more than simply describe program experiences—implementation studies also assess and explain. That is, they not only ask, "What is happening?" but also "Is it what is expected or desired?" and "Why is it happening as it is?"

Comprehensive evaluations of social programs often include both implementation and impact studies. Within this research context, implementation studies can have multiple purposes, such as supporting the impact study by describing the precise nature of the program being tested and explaining the pattern of impact findings over time or across program sites.

Evaluations that include both impact and implementation studies are usually undertaken when policymakers, program administrators, and policy experts are uncertain about whether a given program or policy innovation will work. Often, however, new programs or policies are implemented on the basis of executive or legislative mandates, which may reflect some combination of changing public attitudes or values and knowledge already established through prior practice and research. These mandates oblige federal, state, and local agency executives and program managers to implement new programs or to make changes in existing programs. Particularly when the mandated changes are extensive and/or lead to the creation of new programs, the biggest concerns may be to get the programs "up and running" and working well. In these instances in particular, implementation research separate from an impact study may be warranted and desirable.

The core mission of implementation research, to describe, assess, and explain "what is happening and why," may be especially compelling when brought to bear on the following major issues of program design, resources, administration, services, and outcomes:

What are the program goals, concept, and design? Are they based on sound theory and practice, and, if not, in what respects?

Social programs mandated by Congress, but implemented at the state or local level, are often launched with a block grant to states or communities, open-ended funding for an entitlement, or guaranteed federal matching funds, and are accompanied by rules about the use of those funds (such as eligibility, benefits, services) and a set of specific and general social goals. Within this framework, details of program design are often left to state and local officials. Sometimes implementation research may be concerned with fundamental questions of the soundness of program concept and design. For example, do the proposed services match the needs of the target population and are they likely to contribute to the program's goals? Are the planned administration and information systems feasible and suited to the desired "client flow" and management information needs? In general, questions about theories behind program design help establish that the connections among program activities, systems, services, and client outcomes have some basis in research and experience, and that those activities, systems, and services are feasible given current technology and techniques.

Does the responsible agency (or agencies) have the resources and capacity available and in place to implement the program as planned, and if not, what is needed?

After considering a program's theoretical and practical feasibility, a logical next step is to assess the responsible agency's resources and capacity to implement the program and sustain its operation at a desired level. Implementation research may be designed to include the following questions about a program's resources and capacity: Has the program model been translated into concrete resource requirements? Is program funding sufficient to meet those requirements? Are the right staffing numbers and skills available among the agency's workforce or within the community? Can facilities handle client processing in the numbers and at the rate required? Are the necessary services available in adequate supply throughout the system? Can the current administrative and information systems accommodate the new program, and, if not, what more is required?

New programs rarely have the luxury of satisfying all capacity requirements before getting started. Agency managers usually expect to have to build up to capacity during the first months or years of a new program's life. Also, questions of program resources and capacity may arise at any time in the life of a program, and may be an important reason why program goals are not achieved. For example, as new policies add worker activities to ongoing responsibilities, capacity needs may outgrow available resources and some important tasks may not be accomplished. Even implementation studies of mature programs may thus face resource and capacity questions.

Even if issues of program capacity are settled enough for the program to operate at an acceptable level, a basic implementation challenge remains: having the required facilities, administrative structures, information systems, and services in the right place at the right time. Offices may need to be rearranged to accommodate new procedures or to include space for mandatory meetings with clients. Similarly, new case management responsibilities may require a personal computer on every caseworker's desk. Moreover, administrative systems may require additional programming or new subsystems to meet the program's information needs. Finally, services envisioned in the program design must be arranged in advance to accommodate the expected streams of referred or assigned clients. An important part of most implementation studies is to describe and assess the degree to which the various components of a program appear to be ready and available to agency managers, workers, and clients, and, if not, to diagnose the reasons why.

Is the program suited to its environment?

Social programs do not exist in a vacuum. To be successful, they may require a receptive and well-informed client and advocacy community, as well as favorable social, political, and economic conditions. Program design should take the program's context into account, and may have to adjust to local differences in the environment by allowing for some amount of discretion or program variation across different communities. Implementation research often describes a program's environment and assesses the relationship between this environment and the program's operations and outcomes. For example, an implementation study may ask if a particular training program is preparing clients for jobs in accessible locations or if the program services and client behavioral requirements are suited to the beliefs and practices of major local ethnic groups.

Are program processes and systems operating as planned, and, if not, how and why?

Although a program may have adequate resources and may appear to have all of its components in place, it still may not operate as planned.2 For example, if workers do not implement new policies and procedures, if new administrative systems do not work correctly, or if new communications links with service providers are not used, it is unlikely that the program will achieve its goals. Sometimes challenges related to changing institutional cultures are as important as these operational and structural issues. For example, are workers internalizing new goals and policies and effectively communicating them to clients? Are quality assurance and performance monitoring systems in place to reinforce the program's change in direction? Typically, implementation researchers will observe program operations, measure the degree to which they are operating as planned, diagnose problems, and recommend solutions.

Is the program reaching the intended target population with the appropriate services, at the planned rate and "dosage," and, if not, how and why?

Although the quality and quantity of a program's services may be thought of as part of its operations, it is sometimes convenient to focus on services separately. This is particularly true if service delivery is decentralized and/or other agencies or businesses provide the services. If critical program services are not provided in sufficient numbers in a reasonable time, or if services are inappropriate or of low quality, overall program goals may not be met. To address these issues, implementation studies often observe, measure, and assess service provision, as well as diagnose observed problems and recommend changes.

Are clients achieving desired outcomes, and, if not, how and why?3

Although client outcomes are the result of all the program aspects discussed above, outcomes are often the first signs that a program may or may not be working well. Sometimes, dissatisfaction with results may also prompt program managers, state executives, or the state legislature to undertake an implementation study in the first place.4 In uncovering the reasons why a program is unsuccessful in achieving results, implementation research may call into question any or all of the parts of a program, including, for example, the program's design and theoretical underpinnings, relationship to its environment, administrative and management structure, resources, policies and procedures, and services.

What Research Methods Are Included in Implementation Studies?

While some researchers think of implementation research as a specific methodology, this book defines implementation research as the set of specific issues that fall within the core questions, "What is happening?" "Is it what is expected or desired?" and "Why is it happening as it is?" When characterized by the pursuit of these questions, implementation research is eclectic and pragmatic in its methodologies—the data, data collection strategies, analyses, and presentational styles required are determined by a combination of its specific research questions and an educated guess by the researchers about where and how to look for the answers.5 The following section introduces and summarizes the variety of data and analyses that may be included in implementation research; later chapters describe and illustrate them in greater detail.

Data and Data Collection

Because implementation studies may address almost any phenomena related to the way a social program operates, they can encompass a wide variety of data and data collection strategies. This section introduces the types of data usually collected for implementation research.

Many of the primary data needed for implementation research are gathered firsthand in the "field," where program activities happen and client outcomes are realized. A large portion of needed data are firsthand accounts of program processes, experiences, opinions, and results by the key stakeholders, including program planners and developers, state agency managers, local office management and staff, service provider management and staff, state and local advocacy and public interest groups, and clients. Researchers use a variety of methods to gather information on site from these respondents, including the following examples:

  • Open-ended interviews—These are semistructured conversations that focus on those parts of the program or program experiences most relevant to the informant being interviewed.
  • Focus groups—Focus groups are open-ended discussion groups, usually of no more than 10 to 15 people of similar backgrounds, organized around some small set of topics or themes.
  • Participant observation—Participant observation is a specialized tool of ethnographic research that places the researcher for extended periods of time in a program's social milieu in an attempt to understand the program from the clients' viewpoint.

Some implementation questions require information on relatively large numbers of people, such as questions that ask about the "average" client or the "average" caseworker, or those that ask about how some client groups or local areas are different from other groups or communities. Generally, there are two types of data that include individual information ("microdata") about large numbers of people: administrative data and survey data. Administrative data are routinely collected as part of program operations, and may include both electronic and hard-copy case files. Automated administrative data are usually quantitative or categorical (such as gender or ethnicity), but hard-copy administrative data may also include case note narratives and other qualitative information.

Surveys involve collecting data from large numbers of respondents. The usual modes for surveys are mail, telephone, and in-person interviews; sometimes surveys use a combination of these approaches. Researchers design survey questionnaires so that collected information or responses may be easily converted into quantitative measures, although questionnaires may also include open-ended or qualitative items.

Administrative data are also often available in statistical reports that may be important sources of information about program operations, activities, and outcomes. The federal government or state legislatures sometimes require state agencies to issue statistical reports that monitor program performance and program spending. Other statistical reports may be useful in understanding a program's environment, such as U.S. Census or state vital statistics reports. Other useful statistical reports are compiled by business groups or by other public and special interest groups. Many public agencies have developed web sites that often have statistical summaries of agency activities and accomplishments.

A final source of data for implementation research is agency documents related to program planning, design, start-up, administration, and public information. These documents may include relevant federal and state legislation and administrative rules, mandated federal and state program plans and documents supporting the plans, implementation plans, worker manuals and handbooks, program forms, press kits, and newspaper articles.

Documenting, Assessing, and Explaining Implementation

Implementation research has a variety of analytic strategies at its disposal, depending on the data used and the goal of the analysis—documenting ("What is happening?"), assessing ("Is it what we want or expect to happen?"), and explaining ("Why is it happening as it is?") program implementation. Why are assessment and explanation treated separately in this book? After all, within the context of an impact study, assessment and explanation are nearly the same. That is, impact studies are designed to assess the degree to which a program or policy is causing desired change. In social science, demonstrating that b is caused by a usually means that b is explained by a.

In implementation research, however, "assessment" and "explanation" mean different things. "Assessment" means judging whether or not the program or policy under study is operating according to some model, norm, or standard. "Explanation" means generating hypotheses about why the policy or program is operating as it is and achieving its results. Implementation research is usually not expected to test those causal hypotheses.6

The following is an overview of the types of analyses used in implementation research; chapters 3 and 4 present more detailed accounts of how these strategies may be used to address specific research questions.

Documenting Implementation

Documenting how a program operates is at the core of implementation research. Certainly, it is difficult to understand whether a program is operating as planned and is achieving its intended goals, or why a program is operating as it is, without a clear picture of what is happening in the program. There are two fundamental perspectives implementation research may adopt when describing a social program. First, and most often, the story is told from the point of view of the objective researcher. Although the objective description may include accounts of stakeholder opinions and attitudes, it presents them as important "facts" about the program rather than as alternative or competing descriptions of the program. Sometimes, however, it is also important to describe the program from the point of view of the program's stakeholders. In trying to understand why a program operates as it does, this insider's view can often clarify the program actors' motivations, knowledge, and attitudes, thus helping explain their interactions with the program and providing insight into "what is really happening" in program administration.

Researchers designing and conducting a study to document implementation face several major challenges:

  • Developing an initial idea of what to observe, whom to interview, and what to ask;
  • Sorting through conflicting or even contradictory descriptions and assessments of a program;
  • Dealing with variations over time and across program sites; and
  • Combining quantitative and qualitative data in cogent and useful ways.

Assessing Implementation

Implementation studies provide assessments of program designs, operations, and outcomes. In general, implementation assessments consist of comparing data on program operations, activities, services, and outcomes with some norm or standard and developing an evaluation based on that comparison. In conducting assessments, researchers may appeal to numerous norms or standards, depending on the activity, structure, or outcome evaluated, including the following examples:

  • The initial program model, plan, or design;
  • Federal and state legislative or administrative rules and performance standards;
  • Professional or industry standards;
  • Implementation researchers' professional experience and judgment; and
  • Opinions and judgments of the program's stakeholders.

Explaining Implementation

Through explanatory analysis, implementation research seeks to understand why a program operates and performs as it does, as well as develop recommendations for change or lessons for effective program design and administration. The issue of whether implementation research alone can establish causal links between program policies, program activities, client outcomes, and broader social change is an ongoing controversy. This book does not engage in the debate, but takes the position that while well-designed implementation research can uncover plausible reasons why a program is working or not—can build hypotheses and theories—it should not be expected to demonstrate causality conclusively (see endnotes 3 and 6).

Most explanatory analysis in implementation research uses one or more of three general approaches in developing hypotheses about causal connections: (1) using stakeholder accounts of why they take, or fail to take, specific actions related to program activities and goals; (2) associating natural or planned variations in management, policies, operations, services, or other factors with observed differences in results; and (3) monitoring results against predictions or expectations based on the program's model or theory.

Advantages and Limitations of Implementation Research

As with any research approach, implementation research has both advantages and limitations when compared with alternative methodologies. As mentioned above, implementation research conducted in conjunction with an impact evaluation provides detailed information "inside the black box"—information about how programs operate and how impacts happen. However, stand-alone implementation studies also have important advantages and limitations as compared with impact studies. Among the advantages of implementation research are the following:

  • Provides rapid feedback to program managers—When necessary, implementation studies can be designed and fielded quickly. Because the study of "what is happening" and program operations can occur concurrently, implementation research can feed timely information back to managers and policymakers. In contrast, impact studies often require several years of follow-up to track changes caused by the program.
  • Provides information during a program's formative period—Sometimes the need for information about operations and results is greatest during the program's formative period. Impact studies are most useful when they evaluate mature, stable programs, but implementation studies may be mounted at any time. In fact, stand-alone implementation studies are often used to monitor and fine-tune operations during a program's start-up phase. A related advantage of implementation studies is that they can take place during periods of rapid and widespread program and contextual change and in an environment of uncertainty about how and when the program will stabilize and what its final structure will be.
  • Provides rich contextual and ethnographic information—Implementation studies can enhance policymakers' and program managers' knowledge of the various environments in which programs operate and their knowledge of how workers and clients experience the program. This information may be important for programs whose success depends in part on changing the "culture" of the program, worker-client communication and interaction, and stakeholders' attitudes about the program and its goals. Moreover, by collecting data from a variety of stakeholders, implementation research typically includes multiple perspectives on key program and policy issues.
  • Provides information about the program "as it really is"—One prevailing type of impact study—experimental design—depends on randomly assigning people potentially affected by the program into two groups: one that receives the program intervention and one that does not. Some researchers have noted that imposing random assignment on an ongoing program can alter operations and introduce unwanted "noise" into the evaluation. However, implementation studies alter the normal course of operations very little, if at all.
  • Provides limited and strategic information where and when necessary—A final important advantage of implementation research over longer-term impact studies is that implementation studies can be targeted and strategic. That is, researchers can design implementation studies efficiently to look only at one aspect of operations or to focus on a particular locality or program office. In contrast, impact studies usually require a sizable investment of resources, as large samples of program clients often need to be followed up over a long period of time.

Among the important limitations of implementation research are the following:

  • Does not provide direct and accurate estimates of program impacts or cost-effectiveness—Because implementation studies are not designed to estimate impacts, they will not provide accurate estimates of program impacts or, by extension, estimates of program cost-effectiveness (cost per unit of "impact") or cost-benefit ratios. Implementation studies are not substitutes for well-designed impact studies.
  • Makes some judgments on the basis of qualitative and/or subjective data—As suggested above, some of the data collected for implementation research are quantitative in nature. Analyses using those data are subject to the same standards of statistical rigor as quantitative data used for impact evaluations. Some judgments needed for implementation evaluations, however, must be made on the basis of qualitative or subjective data. For example, assessments about changes in the "culture of a bureaucracy" may be based on a combination of observer judgment and respondent opinion, with few "hard" statistical data to back the assessment up. Some researchers consider the reliance on qualitative and subjective data to be a drawback to implementation research. Nevertheless, those judgments are often of value to program managers and can be a critical dimension to program evaluation when considered with other indicators of operations and results.

How This Book Is Organized

Implementation research may be applied to evaluation studies across the spectrum of social policies and programs. The purpose of this book is to provide practical guidance to consumers and practitioners in applying implementation research to all types of social policies and programs. Because this book grew out of efforts to develop standards for implementation research for welfare reform programs, and because of my background and experience in welfare policy and programs, most of the examples in this book are taken from that policy domain.

Using the general framework introduced above, the next section of this chapter provides an overview of how to organize a comprehensive implementation study of a welfare reform program through specific research questions. The section also gives an introduction to the welfare reform movement of the 1990s so that the reader can understand the issues involved in designing and conducting implementation research for welfare reform.

This book is not intended to be exhaustive, but to suggest the range of research questions and techniques available to implementation studies and the results that may be expected. Moreover, it is not intended to argue for any one point of view or methodology, but to be representative of the range of approaches that have been effectively used. Toward that end, the book includes examples of implementation research of welfare reform programs by research consulting firms, academic practitioners, and government agencies.

The Challenges of Welfare Reform

The 1980s and 1990s witnessed growing popular demand for changes in American welfare policy. Trends in public opinion were paralleled by the accelerating pace of experimentation in welfare policies and programs, particularly at the state and local level. Growing confidence in more dynamic alternatives to need-based cash assistance programs that had been in force since the 1930s combined with political will in the passage of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996. There are three core elements of PRWORA:

  • It converted open-ended federal support for cash assistance (Aid to Families with Dependent Children [AFDC]) into fixed block grants to states (Temporary Assistance for Needy Families [TANF]).
  • It gave states unprecedented discretion in designing welfare policies, programs, and services; in spending block grant monies on related services for low-income families; and in transferring authority and responsibility for these decisions to more local political entities.
  • It limited the use of federal TANF funds to families that have been receiving cash assistance for less than 60 months and imposed other fiscal penalties on states that do not meet various performance standards related to clients' participation in work, work preparation, and other key behaviors.

The implications of these three core elements for the design, operation, and assessment of welfare programs and related programs and services are enormous. Because states are now fiscally responsible for any monies spent on TANF benefits or services beyond the amount of the TANF block grant, as well as for individuals who have received TANF funds or services for more than 60 months in a lifetime, there is a strong incentive to limit welfare use and to move families quickly to economic independence. Those incentives for states to encourage work as an alternative to welfare are strengthened by the threat of fiscal penalties for failure to meet federal standards for the proportion of clients who work or prepare for work.

With the deep-seated changes in policy, goals, expectations, and philosophy coincident with recent welfare reform, welfare programs are undergoing a "paradigm shift" in their basic design.7 That is, rather than a series of small and tightly controlled innovations made to a stable core policy and administrative structure, recent welfare reform looks more like the creation of a new program, with new goals, policies, services, administrative systems, and bureaucratic cultures emerging from state and federal mandates. In an environment in which so many aspects of welfare programs and services are shifting at once, it is likely premature to implement impact evaluations for at least two important reasons. First, because so many factors that may affect program outcomes are evolving at the same time, it would be difficult to isolate the impact of one or several policies or services, even with a classical experimental design. Second, it is difficult to predict in advance exactly what the "new system" will look like and how it will operate. This is exactly the situation in which a stand-alone implementation study may be called upon to address the questions "What is happening?" "Is it what we expected?" and "Why is it happening as it is?"

Designing an Implementation Study of a Welfare Reform Program: Research Questions

Unlike the textbook approach to scientific inquiry—framing, testing, and reframing the hypothesis—implementation research does not always begin with a hypothesis (although it may lead to hypotheses about how and why things work). Nevertheless, any coherent research project must be guided by some plan; investigators cannot simply "go out and look at the program." At the very least, researchers must know something about the program goals and design. Some idea of the design, prescribed policies and activities, and desired outcomes may then be used to guide the development of detailed questions about the program environment, operations, and results. The questions suggest what to look for (data), where and how to look (data collection), and how to use the information strategically (analysis plans).

The opening of this chapter showed how the questions that drive the organization of an implementation study may be related to the logic of the program model. In addition, the ways in which the program is structured and operated may help bring about a set of specific and general social goals. As mentioned previously, an implementation study need not address all aspects of a program to be useful. For example, an implementation study may focus on the delivery of services to clients, the organization and use of administrative information, or on clients' responses to program requirements and opportunities. To give some idea of the breadth of research activities that may go on in an implementation study, this book addresses the logical and practical steps in the design, development, and implementation of a welfare reform program. Using the categories introduced at the beginning of the chapter, the following section translates those general categories into specific research questions that may be part of an implementation study of a welfare program.

What are the program goals, concept, and design? Are they based on sound theory and practice, and, if not, in what respects?

This set of questions asks about the general and specific program goals and requirements, how the program is designed to help meet those goals, and the theoretical and practical connections between the program design and its goals. In addition to describing the program design and its underpinnings, researchers often are interested in understanding how the program was developed and why certain design choices were made. Although studying the process of program planning and development does not necessarily reveal whether the right choices were made, it offers some insight into the rationale behind those choices and the interests they represent. Questions of this type may include the following:

  • How was the program planned and developed? Who were the principal actors and what were their views about the program's goals and design? What compromises, if any, were made during program planning and development? Were those compromises sensible in view of the final program design?
  • What general and specific goals is the program designed to meet? Are the goals feasible on the basis of prior research and practice?
  • What are the federal, state, and local legal and administrative requirements guiding allowable and/or preferred policies, procedures, activities, and services? Are those requirements compatible with the program's goals?
  • How are the program's prescribed policies, procedures, activities, and services designed to advance the program's general and specific goals?
  • What is the design of "client flow" through the program? What administrative systems are required to support program operations? Are they feasible?
  • What are the levels and timing of activities, services, and client participation required to achieve the program's specific goals? Are they feasible on the basis of prior research and practice?
  • What are the theoretical and practical bases connecting the program's design to its goals? Are they sound, and, if not, how and why? How may the program's design be improved to help achieve its goals?

What are the types and levels of resources needed to implement the program as planned?

This set of questions addresses the resource issues implied by the program's design and its goals for client outcomes. This part of an implementation study may be a critical first step in assessing the practical feasibility of a program's design and its likely success. Among the questions falling under this rubric are the following:

  • What types of resources (e.g., physical plant, services, staffing numbers and expertise, and information systems) do the program's design and goals require? What levels of resources are needed to operate the program? What are the budgetary implications?
  • What are the sources of the program's needed resources? For example, what parts of the program's activities and services will be operated by the welfare department, by other agencies, or contracted out to private providers? To what degree are these arrangements already in place and to what degree will they have to be developed? What experience does the welfare agency have in coordinating services with other agencies and institutions?
  • Are the program's resource requirements likely to be met, given the program's budget, available resources, and resources to be developed? If not, how and why?

Is the program suited to its environment?

These questions address the relationship between the program and its demographic, social, cultural, political, and economic environment. Although sometimes treated as part of the investigation of a program's concept and design, the environment is also a critical dimension in understanding a program's operations and results. Among the questions arising out of this part of an implementation study are the following:

  • What are the demographic, social, cultural, political, and economic environments in which the program will operate? How does the environment differ across the state?
  • In what ways has the program's environment affected its implementation, operations, and results?
  • In what ways have variations in the program's local environment led to variations in program design and operations? What implications do these variations have for the program's goals and results?

Are the resources to operate the program in place and, if not, how and why?

This set of questions refers to the resource requirements for program administration and operations and asks whether those requirements have been met. As discussed previously, a new program may require some time to "get up to scale," and the answers to these questions may vary depending on when they are asked. Nevertheless, it is appropriate to ask these questions at any stage of a program's life, as performance may be vitally dependent on the level of available resources. Some specific questions that may be included in this category are the following:

  • Are all required resources available, including
    - Number, location, and infrastructure of local offices and/or other entry and contact points needed for the expected type and number of clientele?
    - Number and types of management and staff needed to maintain target caseloads and types and levels of service?
    - Number and types of activities and services planned to meet client needs and performance benchmarks, including services provided by other agencies and institutions?
    - Information systems needed for individual case management as well as for program management?
  • Which resources are in short supply and why? How does this vary by locality?
  • What implications does the level of resources have for operations and performance?

Are program processes and systems operating as planned, and, if not, how and why?

This group of research questions and the next form the basis for studies focused on program operations, or "process studies." The questions in this first group concern whether program processes and systems are operating as planned. Although the distinction between questions about program operations and questions about the scale and quality of services (the next group) is somewhat arbitrary, the two groups require different data collection and measurement techniques. The following are some examples of specific questions in this group:

  • Is the application process working as planned, including, for example
    - Do applicants know where to go as they enter the office building or program area?
    - Are applicants connected with eligibility workers as planned?
    - Do eligibility workers follow new procedures and apply new policies when conducting eligibility interviews?
    - Do workers communicate new eligibility standards and requirements to applicants?
    - Do workers process applicant information correctly and on time?
  • Are workers implementing the new policies and procedures regarding behavioral requirements for applicants and recipients, including, for example
    - Are workers making appointments with applicants and/or recipients for program orientations and/or assessments?
    - Are applicants and recipients being referred to appropriate services and activities?
    - Are workers performing prescribed applicant and recipient follow-up and monitoring?
  • Are ongoing eligibility and benefit policies being followed, including, for example
    - Are workers performing eligibility determinations using new procedures and policies?
    - Are workers collecting and acting on information about the financial and other eligibility status conditions of recipient families?
  • Are applicants and clients attending the various education, employment, training, work experience, workfare, and other referred activities and services? Are applicants and clients being provided with needed supportive services?
  • Are associated agencies and service providers delivering agreed-upon and contracted services according to program expectations and policies, as well as communicating client progress, outcomes, and failures to comply in an accurate and timely fashion to TANF program case managers?
  • Are information systems operating as planned, including containing required data fields and expected computational and presentational capabilities? Are the right people getting easy and timely access to needed data? Are workers updating data fields as expected? Is the information included in the administrative system being used as planned?
  • How and why are various program processes and systems not operating as planned? Does this vary by locality? What are the implications for program operations and performance and client outcomes? How may program processes and systems be improved?

Is the program reaching the intended target population with the appropriate services, at the planned rate and "dosage," and, if not, how and why?

As mentioned above, this group of research questions is concerned with the quantity, quality, and timing of program activities and services, as opposed to questions of whether or not the right activities and services are happening at all. This part of an implementation study is particularly important when considering the likely connection between program activities and services, and client outcomes. That is, if the program is to have its intended impact on families, it must provide the prescribed services to sufficient numbers of clients within the expected time frame. Some specific research questions addressing these issues include the following examples:

  • How often are individual or group orientations scheduled? What proportion of clients scheduled for orientations actually attend those orientations? On average, how long do clients wait for a scheduled orientation? On average, how many attend orientations? What is the content and quality of the orientations?
  • What is the attendance rate for individual and/or group employability assessments?
  • What proportion of applicants and clients are expected to engage in work or a work-related activity? How many applicants and clients are engaging in work or a work-related activity? Are proper and timely compliance and sanctioning procedures being followed for applicants and clients who do not engage in work or a work-related activity? If not, how and why?
  • What work or work-related activities or services are clients assigned to or choose to engage in? For each type of activity,
    - What number and proportion of clients are assigned?
    - What number and proportion begin the activity or service?
    - On average, how long do clients wait before beginning the activity or service?
    - On average, what number and proportion of clients are engaged in an activity or service in a given time period?
    - On average, how long do clients remain in the activity or service? Is this for a longer or shorter period than expected?
    - On average, how many hours per week does the activity or service require? How many hours per week do clients spend on the activity or service?
    - What is the content and quality of the activity or service? Is it sufficient to meet program expectations? If not, how and why?
  • What supportive services are available to applicants and recipients? For each supportive service,
    - How many clients need or request the service?
    - How many receive or use the service?
    - Is the service available for all who need it? If not, how and why? On average, how long does someone have to wait for the service?
    - What is the content and quality of the supportive service? Is it sufficient to meet program expectations? If not, how and why?
  • What are the implications of the content, timing, and quality of services for program operations and performance? How does this vary by locality? How may services be improved?
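Many of the "rate and dosage" questions above reduce to simple tabulations over case records. The following is a minimal sketch of how such measures might be computed, assuming a hypothetical, simplified record layout; the field names and dates are invented for illustration and would need to match an actual program's information system.

```python
from datetime import date
from statistics import mean

# Hypothetical case records: each entry notes the date a client was
# scheduled for a program orientation and the date attended (None = no-show).
clients = [
    {"scheduled": date(2024, 1, 3), "attended": date(2024, 1, 10)},
    {"scheduled": date(2024, 1, 3), "attended": None},
    {"scheduled": date(2024, 1, 5), "attended": date(2024, 1, 5)},
    {"scheduled": date(2024, 1, 8), "attended": date(2024, 1, 22)},
]

# "What proportion of clients scheduled for orientations actually attend?"
attended = [c for c in clients if c["attended"] is not None]
attendance_rate = len(attended) / len(clients)

# "On average, how long do clients wait for a scheduled orientation?"
# (computed only over clients who attended)
avg_wait_days = mean((c["attended"] - c["scheduled"]).days for c in attended)

print(f"Attendance rate: {attendance_rate:.0%}")
print(f"Average wait: {avg_wait_days:.1f} days")
```

In practice these tabulations would be run against the program's administrative data system, broken out by locality and time period to support the comparative questions listed above.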

Is the program achieving desired outcomes, and, if not, how and why?

A final set of questions focuses on program results, at both the "micro" and "macro" level. At the micro level, implementation researchers want to know what happens to individual applicants and clients as they pass through the program and are subject to its policies, procedures, behavioral requirements, and services. At the macro level, studies focus on overall changes in the administrative and institutional culture of welfare and whether the reform effort has substantially changed workers', clients', and the public's attitudes and expectations about welfare. Some important questions about program outcomes include the following examples:

  • How many clients become employed, and in what time frame? What types of jobs in what industries are clients finding? What are the characteristics of clients' jobs, including, for example, wages, hours, and benefits? What is the record of job retention?
  • How many families are achieving financial independence (leaving welfare), in what time frame, and for what reasons?
  • What are the changes in families' sources of income and total income?
  • What are the changes in families' household and living arrangements?
  • What are the changes in children's well-being?
  • How and why do the above vary by locality? By demographic or other characteristics?
  • To what degree have client and worker expectations as well as the "culture of welfare" changed?
  • Is the program achieving its goals for client and other outcomes? If not, how and why? How may program performance be improved?

The Flexibility of the Implementation Research Agenda

This long list of research questions that guide implementation research is based on a provisional understanding of what the program is trying to do and how it is trying to do it. Beginning with specific questions helps the researcher identify the study's quarry—the data and data sources needed to answer the general questions: "What is happening?" "Is it what we expected?" and "Why is it happening as it is?" One great advantage of implementation research, however, is its flexibility. Because the study is not locked into testing a set of hypotheses specified beforehand, it can shift its focus as it gathers data and fleshes out the program's details. Some preliminary research questions may become moot and new questions may arise. Particularly in studies of new programs or of widespread changes in existing programs, researchers must be prepared to change their focus or adopt new lines of questioning as the program's actual shape emerges.

One example of the need to be flexible in an implementation research study is offered by an evaluation of a welfare reform demonstration in New York State tested in the 1980s—the Comprehensive Employment Opportunity Support Center (CEOSC) program (see Werner and Nutt-Powell 1988). CEOSC was an intensive education, employment, and training program for longer-term welfare clients with preschool children. The program was operated either by special units of a county's welfare department or by private not-for-profit social service agencies. Part of the rationale for this approach was to set CEOSC off from "business as usual" in the welfare department's regular income maintenance units.

Initially, the evaluation research plan focused on the administrative structure, activities, and services of the CEOSC program site or unit. The first site visits to CEOSC programs made it clear that participants' relationship with their eligibility workers was a critical dimension of program success. First, CEOSC participants relied on eligibility workers for reimbursements for transportation, childcare, and other training expenses. Second, and more subtly, the eligibility workers' acceptance of CEOSC was an important element in building participants' confidence in volunteering for, and remaining involved in, CEOSC. The inherent flexibility of a stand-alone implementation study allowed the researchers to add, change, or remove critical research questions as the study progressed.

Getting Started

Each succeeding chapter in this guidebook builds upon the previous chapters in developing a blueprint for an implementation study of welfare reform, although the approaches discussed are applicable to a wide range of programs. Table 1.1 includes some illustrative research questions and the type of inquiry each represents (i.e., documenting ["What is happening?"], assessing ["Is it what is expected or desired?"], or explaining ["Why is it happening?"]).


1. Occasionally, "implementation research" is used in the narrower sense of studies of the design, planning, and especially the initial operating phase, or start up, of a program. Sometimes, "implementation research" and "process research" are used synonymously, but "process research" is also used in a more limited way to refer to the study of a program's internal operations and relationships.

2. Note that an implementation study may be warranted even if programs are operating as planned. For example, well-operating programs may be improved or may deserve study because they can provide guidance to other agency operations or to similar programs elsewhere. Note also that a program may not be working as planned but may still produce its intended outcomes (and/or other beneficial outcomes). In these instances, implementation research may help shed light on how the program is working and whether it represents a better alternative to the initial program model.

3. Note that this question avoids phrasing that implies a program is "producing" or "causing" changes in outcomes (such as increasing employment among welfare recipients). This is in keeping with our definition of implementation research that includes "outcomes" but not "impacts." For example, it is legitimate to ask an implementation study to address the following question: "Do participants in the program's adult literacy class have higher reading scores after taking the class, and if not, what are some possible explanations why?" On the other hand, it would take an impact study to answer the following question: "To what degree are the reading scores of participants in the program's adult literacy class higher than they would have been in the absence of the class or with some alternative treatment?"

Sometimes the distinction between outcomes and impacts results in the mistaken impression that implementation studies must never deal with issues of causality. Implementation studies are usually limited to describing outcomes without being able to estimate how much (if any) of a change in client outcomes was caused by the program intervention. However, implementation studies also attempt to explain observed outcomes. That is, implementation studies investigate assumed causal connections between program services and client outcomes. For example, the design of an implementation study and its research focus are based on some notion—whether implicit or explicit—of how the parts of a program interrelate and how the program is supposed to change client outcomes. Except in the rare instances in which only the intervention and no other factor could possibly have affected given outcomes, however, an implementation study is not designed to estimate the degree to which observed changes are due to the intervention. Rather, an implementation study can only indicate that the program may or may not be changing client behavior and outcomes as predicted by the ideas underlying the program design. Implementation studies may also generate new hypotheses about alternative potential causal processes at work. But only a well-designed impact study can indicate, within established standards of statistical precision, whether an intervention is or is not having an impact on client behavior and outcomes.

4. Questions about a program's effectiveness in realizing its goals may not be the only reason to engage in implementation research. For example, even if a program appears to be working well, it may be important for implementation research to ask "How can the program be made more effective?" Similarly, if a program seems to be achieving exemplary results another important implementation study question may be: "Why does this program work so well and can it be replicated elsewhere?"

5. This is not to say that theory has no place in implementation research. Some approaches to implementation research are based on theories about how social programs operate (see chapter 4).

6. An important exception is "administrative modeling," discussed in more detail with examples in chapter 4. This approach uses statistical techniques that may, under the right conditions, indicate causal connections between given aspects of a program's organization or operations and program results. The important thing to note about this type of analysis is that it is not expected to allow policymakers to choose among alternative program or policy environments (as in a well-designed impact study), but rather to help policymakers and program managers make better administrative and operational choices within a given program and policy environment. See Mead (2003).

7. For the concept of a "paradigm shift," see Kuhn (1962) and Corbett (1997).

A Guide to Implementation Research, by Alan Werner, is available from the Urban Institute Press (paper, 6" x 9", 168 pages, ISBN 0-87766-724-1, $26.50). Order online or call (202) 261-5687; toll-free 800.537.5487.
