In randomized controlled trials (RCTs), also known as experiments, participants are randomly assigned to treatment and control groups; the treatment group receives the proposed intervention and the control group does not. The impact of the intervention is then measured by differences in outcomes between the two groups. Because the groups are randomly assigned, they do not differ in any systematic way that might explain the difference in outcomes.
RCTs can often provide the strongest causal evidence about a program’s effectiveness. In an RCT, differences in outcomes between treatment and control groups can be attributed, on average, to program availability and participation rather than to differences in unobserved characteristics between program participants and nonparticipants.
Participants are selected at random from the same population, and the unit of randomization can range from individuals to entire neighborhoods or school districts. This process helps make the treatment and control groups equivalent, on average, in motivation, ability, knowledge, and socioeconomic and demographic characteristics. The control group provides the counterfactual that helps observers understand what would have happened to the treatment group had it not received the intervention.
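As a rough illustration of this logic, the following sketch simulates a small RCT with made-up data (the variable names and effect size are hypothetical, not drawn from any study discussed here). Because assignment is random, the control group's average outcome serves as the counterfactual, and a simple difference in means recovers the program's effect.

```python
# Minimal sketch with simulated data; all names and values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

treat = rng.integers(0, 2, size=n)       # 1 = treatment group, 0 = control group
ability = rng.normal(0, 1, size=n)       # unobserved characteristic, balanced by randomization
true_effect = 2.0
outcome = 10 + true_effect * treat + ability + rng.normal(0, 1, size=n)

# The control group's mean outcome stands in for what the treatment group
# would have experienced without the intervention.
impact = outcome[treat == 1].mean() - outcome[treat == 0].mean()
print(f"Estimated impact: {impact:.2f} (true effect: {true_effect})")
```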
While RCT studies require a larger investment of resources and more planning than many other types of studies, they are often needed to determine causal impact in the absence of any natural experiment in which exposure to the program or policy arguably resembles random assignment.
RCT specification choices
RCTs first took hold as a research strategy in the medical field. Today, they are widely used across social science and social policy research. However, experiments in the social sciences typically cannot be conducted in a controlled laboratory, and implementing an RCT in the real world creates challenges that must be anticipated and addressed in the study design and analysis.
In some cases, a comparison of mean changes in outcomes between the treatment and control groups in an RCT can provide an estimate of the causal effect of the program of interest. However, even in an RCT, sampling variation can lead to differences in the average characteristics of treatment and control participants, particularly in small samples. These differences may then lead to distinct outcomes between the two groups that cannot be attributed to the effects of the program. Regression-based approaches can then be used to control for these measured differences.
Control variables
One decision when analyzing RCT data is whether to include control variables. Including control variables can reduce the variance and increase the precision of the impact estimates when outcome variables are correlated with observable factors such as age or education level. The most important control variable to include in the analysis of RCT data is the baseline level of the final outcome variable.
Including control variables that are strongly correlated with the outcome variable can reduce the amount of unexplained variance and, in turn, the sample size needed to detect an effect. However, including covariates that are influenced by the treatment can bias the estimates. In addition, including too many control variables is likely to reduce, rather than increase, the precision of the estimate.
In practice, this means that control variables should be measured before randomization occurs, so they cannot be affected by the treatment itself.
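A minimal sketch of this kind of regression adjustment is below, assuming an analysis file with illustrative columns named outcome, treat, baseline_outcome, and age (all hypothetical). It compares an unadjusted difference-in-means regression with one that controls for pre-randomization covariates, including the baseline level of the outcome.

```python
# Minimal sketch; the file name and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rct_data.csv")

# Unadjusted estimate: regressing the outcome on the treatment indicator
# reproduces the simple difference in means.
unadjusted = smf.ols("outcome ~ treat", data=df).fit()

# Adjusted estimate: adding the baseline outcome and other covariates measured
# before randomization absorbs outcome variance, typically shrinking the
# standard error on the treatment coefficient.
adjusted = smf.ols("outcome ~ treat + baseline_outcome + age", data=df).fit()

print("Unadjusted:", unadjusted.params["treat"], unadjusted.bse["treat"])
print("Adjusted:  ", adjusted.params["treat"], adjusted.bse["treat"])
```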
Blocking or stratifying
Another way to improve the precision of the causal estimates in an RCT is through blocking or stratification, where individuals are grouped based on some combination of their baseline characteristics and then randomized within each of these blocks or strata. Estimates are then taken within each block or stratum and pooled across blocks. In the extreme case of blocking, each block contains only two members; this design is termed matching. A simple way to estimate an RCT with blocking is to include a series of block indicator variables in the regression and suppress the intercept, as in the sketch below.
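The sketch below illustrates this with simulated data and hypothetical block sizes: treatment is randomized within each block, and the pooled effect is estimated by regressing the outcome on the treatment indicator plus a full set of block indicators with the intercept suppressed.

```python
# Minimal sketch with simulated blocks; sizes and effect are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({"block": np.repeat(np.arange(50), 20)})   # 50 blocks of 20 people

# Randomize within each block so treatment and control are balanced block by block.
df["treat"] = df.groupby("block")["block"].transform(
    lambda g: rng.permutation([0, 1] * (len(g) // 2))
)
df["outcome"] = 3.0 * df["treat"] + 0.1 * df["block"] + rng.normal(size=len(df))

# "0 +" suppresses the intercept so every block gets its own indicator;
# the coefficient on treat pools the within-block comparisons.
model = smf.ols("outcome ~ 0 + C(block) + treat", data=df).fit()
print(model.params["treat"])
```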
Intent to treat versus treatment on the treated
Many RCTs experience difficulty with recruitment or attrition of study subjects. Not all subjects who are offered the treatment will accept or complete it. Therefore, the effect on the whole group may differ from the effect on only those who received the full treatment.
When designing an RCT, the researcher must decide whether to estimate the intent to treat (ITT), the treatment on the treated (TOT), or both. The ITT estimates the average effect of offering the treatment on outcomes, that is, the effect on everyone who was offered the treatment, whether or not they received it. The TOT estimates the average effect of the actual treatment on outcomes, that is, the effect only on those who received the full treatment. Where program participation is voluntary, the ITT may be the more policy-relevant effect, because policymakers can usually only offer a program, not compel participation. In other cases, researchers may be more interested in what the effect would be if everyone in the population received the intervention, which is closer to the TOT.
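The simulated sketch below (all names and values are hypothetical) contrasts the ITT, which compares groups by their random assignment, with a naive comparison of program completers to controls; the latter is shown only to illustrate why it is not a clean causal estimate when take-up depends on unobserved characteristics.

```python
# Minimal sketch with simulated data; names and values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 5000

assigned = rng.integers(0, 2, size=n)     # random offer of the program
motivation = rng.normal(0, 1, size=n)     # unobserved; drives both take-up and outcomes
took_up = (assigned == 1) & (motivation + rng.normal(0, 1, size=n) > 0)
outcome = 1.5 * took_up + motivation + rng.normal(0, 1, size=n)

# ITT: compare everyone by assignment, regardless of take-up.
itt = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()

# Naive "completers vs. controls" comparison: contaminated by self-selection,
# because those who take up the offer are more motivated on average.
naive = outcome[took_up].mean() - outcome[assigned == 0].mean()

print(f"ITT (effect of the offer): {itt:.2f}")
print(f"Naive completer comparison: {naive:.2f}")
```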
Subpopulation analysis and instrumental variables
In many cases, the individuals who do not accept or finish the treatment differ systematically from those who do, so the results are internally valid only for the group that received the treatment. Two main techniques can be used to reduce the bias caused by these systematic differences. The first is subpopulation analysis, in which the effect is estimated only for those groups of people who were more likely to accept the treatment. These groups must be defined, however, by characteristics that they exhibited before randomization occurred. For instance, if individuals who live in a certain geographic area are less likely to complete the treatment, then all individuals from that geographic area are dropped from both the treatment and control groups.
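A minimal sketch of this kind of subpopulation analysis is below, again assuming a hypothetical analysis file with illustrative column names; the key point is that the excluded group is defined by a pre-randomization characteristic and is dropped from both arms.

```python
# Minimal sketch; file name, column names, and category labels are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rct_data.csv")

# Drop everyone from the low-completion area -- treatment AND control alike --
# so the retained treatment and control groups remain comparable.
sub = df[df["geographic_area"] != "low_completion_area"]

subpop = smf.ols("outcome ~ treat", data=sub).fit()
print(subpop.params["treat"], subpop.bse["treat"])
```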
The second way to account for imperfect compliance and/or differential attrition is an instrumental variables approach, in which random assignment to the treatment group is used as an instrument for actual receipt of the treatment. This approach can help remove some of the bias caused by selection into take-up, but it is valid only if being assigned to the treatment group has no effect on the outcome other than through actual receipt of the treatment.
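With a single binary instrument (random assignment) and a single binary treatment, the instrumental variables estimate reduces to a simple ratio: the ITT effect divided by the difference in take-up rates between those assigned and not assigned to treatment. The simulated sketch below (illustrative names and values) computes it directly.

```python
# Minimal sketch with simulated data; names and values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 5000

assigned = rng.integers(0, 2, size=n)     # randomized offer, used as the instrument
motivation = rng.normal(0, 1, size=n)     # unobserved driver of take-up and outcomes
received = (assigned == 1) & (motivation + rng.normal(0, 1, size=n) > 0)
outcome = 1.5 * received + motivation + rng.normal(0, 1, size=n)

itt_effect = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
takeup_diff = received[assigned == 1].mean() - received[assigned == 0].mean()

# Wald/IV estimate: scales the effect of the offer by how much the offer
# actually changed receipt of the treatment.
iv_estimate = itt_effect / takeup_diff
print(f"IV estimate: {iv_estimate:.2f} (true effect of receipt: 1.5)")
```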
When is an RCT appropriate?
RCTs are a powerful tool for social science research, but they are not appropriate for certain scenarios. In some cases, it is unethical to deny treatment to individuals; in others, it is infeasible to do so (this is often the case with place-based interventions).
RCT evaluations can be time-intensive and expensive, and many programs may not have the operational capacity or client volume to justify participation in such a study. Even well-established programs with large client bases may find it difficult to participate in an evaluation without external support. Funding supporting these evaluations is limited, putting more pressure on the studies that are conducted to be well-planned and well-executed.
And, of course, accurately capturing the impact of a particular intervention (its internal validity) does not ensure that the findings are generalizable to other people, programs, geographies, or timeframes (its external validity).
Though RCTs pose their own unique challenges, they are often the best, and sometimes the only, way to draw conclusions about an intervention’s causal impact.
Related research
The DNA Field Experiment: Cost-Effectiveness Analysis of the Use of DNA in the Investigation of High-Volume Crimes [/research/publication/dna-field-experiment]
What is the best way to provide financial literacy education? [/research/publication/randomized-controlled-trials-and-financial-capability]
Prepaid Cards at Tax Time and Beyond: Findings from the MyAccountCard Pilot [/research/publication/prepaid-cards-tax-time-and-beyond-findings-myaccountcard-pilot]