The idea in panel regression is to use an individual unit as its own comparison group by comparing changes over time or some other dimension instead of comparing units that are fundamentally different, some of which are treated and some not. There is a wide variety of panel methods, some of which are discussed by Nichols (2007) and many more by Singer and Willett (2003) or Skrondal and Rabe-Hesketh (2004). The two most common methods are a difference-in-difference regression and a fixed-effect model.
The prototypical difference-in-difference regression compares two types of units, some that are treated and some that are not, before and after the start of treatment (“treatment” here means the explanatory factor of interest). If there is an effect due to time (maturation), that effect is captured by the mean change among the untreated units; the change among treated units captures both the maturation effect and the treatment effect. By subtracting the difference over time among untreated cases from the difference over time among treated cases (the difference in the differences), we obtain an estimate of the treatment effect under the assumption that treatment is not assigned based on patterns of expected change over time.
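The two-by-two logic described above can be sketched numerically. This is a minimal illustration with made-up group means, not an estimate from real data:

```python
# Minimal two-by-two difference-in-differences sketch (illustrative, made-up numbers).
# Each value is a group mean outcome before or after the start of treatment.
pre_treated, post_treated = 10.0, 15.0   # treated units
pre_control, post_control = 10.0, 12.0   # untreated units

# Change among untreated units captures maturation (the time effect) alone.
maturation = post_control - pre_control
# Change among treated units captures maturation plus the treatment effect.
change_treated = post_treated - pre_treated
# Subtracting the differences isolates the treatment effect,
# assuming treatment is not assigned based on expected change over time.
did_estimate = change_treated - maturation
print(did_estimate)  # 3.0
```

The same estimate can be obtained by regressing the outcome on a treated-group indicator, a post-period indicator, and their interaction; the interaction coefficient is the difference in differences.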
That is, if low-performing units are assigned treatment, a cross-sectional comparison in the posttreatment period would be subject to negative selection bias (and high-performing units differentially selected into treatment would produce positive selection bias), but a panel regression eliminates this bias. However, if units with lower expected future growth are differentially selected into treatment, difference-in-difference methods do not eliminate the bias.
Likewise, a fixed-effect model compares only deviations from mean outcomes and deviations from mean explanatory factors within each observational unit, eliminating any factors that do not vary over time (or along whatever other dimension outcomes and explanatory factors vary within units). If treatment is assigned based on expected change (Rothstein 2009, 2010), however, the fixed-effect model does not eliminate bias due to unobserved factors. More complicated models are then required, or a model with greater internal validity should be employed, such as an instrumental variables or regression discontinuity design.
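The within transformation behind the fixed-effect model can be sketched directly. In this illustration with made-up panel data, the two units have different intercepts (a time-invariant unit factor), but demeaning each unit's outcomes and explanatory factor removes that difference and recovers the common slope:

```python
# Fixed-effects (within) estimator sketch on made-up panel data.
# Demean y and x within each unit, then regress deviations on deviations;
# any factor that is constant within a unit drops out of the demeaned data.
from collections import defaultdict

# (unit, x, y) observations: unit intercepts differ (5 vs. 12),
# but the within-unit slope is 2 for both units.
data = [
    ("A", 1.0, 5.0), ("A", 2.0, 7.0), ("A", 3.0, 9.0),
    ("B", 1.0, 12.0), ("B", 2.0, 14.0), ("B", 3.0, 16.0),
]

by_unit = defaultdict(list)
for unit, x, y in data:
    by_unit[unit].append((x, y))

sxy = sxx = 0.0
for obs in by_unit.values():
    xbar = sum(x for x, _ in obs) / len(obs)
    ybar = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        sxy += (x - xbar) * (y - ybar)
        sxx += (x - xbar) ** 2

# OLS slope using only within-unit variation.
beta_within = sxy / sxx
print(beta_within)  # 2.0
```

A pooled regression of y on x in these data would be biased by the unit-level difference in intercepts; the within estimator is not, so long as the unobserved unit factors are indeed constant over time.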
Nichols, Austin. 2007. “Causal Inference with Observational Data.” Stata Journal 7 (4): 507–41. [http://www.stata-journal.com/article.html?article=st0136]
Rothstein, Jesse. 2009. “Student Sorting and Bias in Value-Added Estimation: Selection on Observables and Unobservables.” Education Finance and Policy 4 (4): 537–71. [http://www.mitpressjournals.org/doi/abs/10.1162/edfp.2009.4.4.537#.U3Ov8RBFYwg]
———. 2010. “Teacher Quality in Educational Production: Tracking, Decay, and Student Achievement.” Quarterly Journal of Economics 125 (1): 175–214. [http://qje.oxfordjournals.org/content/125/1/175.short]
Singer, Judith D., and John B. Willett. 2003. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. New York: Oxford University Press. [http://gseacademic.harvard.edu/~alda/]
Skrondal, Anders, and Sophia Rabe-Hesketh. 2004. Generalized Latent Variable Modeling: Multilevel, Longitudinal and Structural Equation Models. Boca Raton, FL: Chapman & Hall/CRC. [http://www.gllamm.org/books/]