Web Survey Bibliography
When evaluating a collection of job-training programs from nonrandomized data on people who have and have not been trained, the most critical piece of information is what would have happened to those who received training in the absence of that training. That is, for each episode of unemployment, how much time would it have taken until the person found employment, and what kind of employment, had the person not been trained. For episodes that do not end with entrance into a job training program, the answer to this question is known; but for episodes that do end with entrance into a job training program, the answer is not known, because the episode is censored by the training, and so must be inferred, with uncertainty, from data on other episodes. Our plan is to multiply impute the missing (right-censored) outcomes of such episodes to create a "null" data set, that is, a data set that represents what would have happened without any job training programs. The multiple imputations represent the uncertainty in these estimates and allow straightforward analysis to answer many questions. The creation of such a multiply-imputed data set, however, is a massive task, requiring innovative and complex algorithms that seek to find, for each censored episode, a donor pool of matching episodes with longer times before job training. This presentation will briefly describe this algorithm.
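The donor-pool idea described above can be illustrated with a minimal hot-deck imputation sketch. This is not the presentation's actual algorithm (which matches on covariates and is far more elaborate); it only assumes episodes recorded as a number of weeks unemployed plus a flag marking whether the episode was censored by entry into training, and it fills each censored episode from donors that remained unemployed at least as long.

```python
import random

def multiply_impute(episodes, m=5, seed=0):
    """Hot-deck multiple imputation sketch for unemployment episodes
    that are right-censored by entry into a job training program.

    Each episode is a dict:
      {"weeks": observed weeks unemployed,
       "censored": True if the episode ended in training entry}

    Returns m completed ("null") data sets in which each censored
    episode's outcome is a draw from a donor pool of uncensored
    episodes that lasted at least as long before a job was found.
    """
    rng = random.Random(seed)
    completed = []
    for _ in range(m):
        dataset = []
        for ep in episodes:
            if not ep["censored"]:
                dataset.append(dict(ep))
                continue
            # Donor pool: uncensored episodes at least as long as the
            # time already observed before training began.  A real
            # implementation would also match on covariates.
            donors = [d for d in episodes
                      if not d["censored"] and d["weeks"] >= ep["weeks"]]
            if donors:
                donor = rng.choice(donors)
                dataset.append({"weeks": donor["weeks"], "censored": False})
            else:
                dataset.append(dict(ep))  # no valid donor: leave as observed
        completed.append(dataset)
    return completed
```

Because each of the m completed data sets draws its own donors, the variation across them reflects the uncertainty in the imputed outcomes, which is what makes downstream analyses straightforward.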
- The Use of Multiple Imputation to Create a Null Data Set from Nonrandomized Job Training Data; 2005; Rubin, D. B.