Abstract
Matching and Difference in Difference (DID) are two widespread methods that use pre-treatment outcomes to correct for selection bias. I detail the sources of bias of both estimators in a model of earnings dynamics and entry into a Job Training Program (JTP), and I assess their performance using Monte Carlo simulations of the model calibrated with realistic parameter values. I find that Matching generally underestimates the average causal effect of the program and gets closer to the true effect when conditioning on an increasing number of pre-treatment outcomes. When selection bias is symmetric around the treatment date, DID is consistent when implemented symmetrically, i.e. comparing outcomes observed the same number of periods before and after the treatment date. When selection bias is not symmetric, Monte Carlo simulations show that Symmetric DID still performs better than Matching, especially in the middle of the life-cycle. These results are consistent with estimates of the bias of Matching and DID from randomly assigned JTPs. Some of the virtues of Symmetric DID extend to programs other than JTPs that are allocated according to a cutoff eligibility rule.
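The Symmetric DID estimator described above can be sketched in a few lines: for a symmetry parameter tau, it contrasts the change in outcomes from tau periods before to tau periods after the treatment date between treated and untreated units. The sketch below uses purely illustrative simulated data (the variable names, sample size, and true effect are assumptions for the example, not values from the paper).

```python
# Hedged sketch of a Symmetric Difference-in-Difference estimator:
# compare outcomes observed the same number of periods (tau) before and
# after the treatment date for treated vs. untreated units.
# All data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def symmetric_did(y_pre, y_post, treated):
    """Symmetric DID estimate.

    y_pre   -- outcome observed tau periods BEFORE the treatment date
    y_post  -- outcome observed tau periods AFTER the treatment date
    treated -- boolean array, True for treated units
    """
    delta = y_post - y_pre  # within-unit change over a symmetric window
    return delta[treated].mean() - delta[~treated].mean()

# Illustrative panel: 1000 units, true average effect of 2.0 for the treated.
n = 1000
treated = rng.random(n) < 0.5
true_effect = 2.0
y_pre = rng.normal(10.0, 1.0, n)
y_post = y_pre + rng.normal(0.0, 1.0, n) + true_effect * treated

print(symmetric_did(y_pre, y_post, treated))  # close to the true effect 2.0
```

Because the pre- and post-treatment outcomes are taken at the same distance from the treatment date, transitory pre-treatment shocks that are symmetric around that date (such as a temporary earnings dip) difference out, which is the intuition behind the consistency result stated in the abstract.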
Keywords
Matching; Difference in Difference; Job Training Programs
JEL codes
- C21: Cross-Sectional Models • Spatial Models • Treatment Effect Models • Quantile Regressions
- C23: Panel Data Models • Spatio-temporal Models
Published in
Journal of Econometrics, vol. 185, n. 1, March 2015, pp. 110–123