Assessing Bias in Value-Added Models
Paper Session
Sunday, Jan. 8, 2017 3:15 PM – 5:15 PM
Hyatt Regency Chicago, Field
- Chair: Philip Gleason, Mathematica Policy Research
Validating Teacher Effect Estimates Using Changes in Teacher Assignments in Los Angeles
Abstract
We evaluate the degree of bias in teacher value-added estimates from Los Angeles using a “teacher switching” quasi-experiment proposed by Chetty, Friedman, and Rockoff (2014a). We have three main findings. First, we confirm that value-added is an unbiased forecast of teacher impacts on student achievement, and this result is robust to a range of specification checks. Second, we find that value-added estimates from one school provide unbiased forecasts of a teacher’s impact on student achievement in a different school. Finally, we document systematic differences in the effectiveness of teachers by student race, ethnicity, and prior achievement that widen achievement gaps rather than close them.
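The estimating equation behind the teacher-switching design can be summarized in a few lines. Below is a minimal Python sketch, assuming a hypothetical school-grade-year panel with columns school, grade, year, mean_score, and mean_va; the names and the function are illustrative, not the authors' implementation, and the leave-out construction of the value-added estimates is taken as given.

```python
# Minimal sketch of the Chetty-Friedman-Rockoff teacher-switching check.
# Assumes a hypothetical DataFrame `cells` with one row per
# school-grade-year: mean_score (cell mean test score) and mean_va
# (mean leave-out value-added of the teachers staffing the cell).
import pandas as pd
import statsmodels.formula.api as smf

def switching_test(cells: pd.DataFrame):
    """Regress changes in cell mean scores on changes in mean teacher VA.

    If value-added is an unbiased forecast of teacher impacts, the
    coefficient on d_va should be close to 1.
    """
    cells = cells.sort_values(["school", "grade", "year"]).copy()
    g = cells.groupby(["school", "grade"])
    cells["d_score"] = g["mean_score"].diff()  # year-over-year score change
    cells["d_va"] = g["mean_va"].diff()        # change from staffing turnover
    d = cells.dropna(subset=["d_score", "d_va"])
    # Year fixed effects absorb district-wide shocks to the test.
    model = smf.ols("d_score ~ d_va + C(year)", data=d)
    return model.fit(cov_type="cluster", cov_kwds={"groups": d["school"]})
```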
Does It Matter How Teacher Effectiveness Is Measured? Assessing Bias in Alternative Value-Added Models Using Data From Multiple Districts
Abstract
We measure the amount of bias in value-added estimates using the approach developed by Chetty, Friedman, and Rockoff (2014), which is based on teacher transitions. This approach compares changes over time in average test scores within schools to changes in the value-added estimates of the teachers in those schools. We extend previous applications of the method by applying it to data from multiple districts and to several value-added models. Our data include a geographically diverse set of 20 districts of varying size, with teacher value added estimated for the five school years from 2008-2009 to 2012-2013. We compare bias in value-added models that include or exclude common features, such as accounting for measures of students’ peers in the same classroom or addressing measurement error in pre-test scores. We then examine how using a value-added model identified as more biased affects teachers of certain students, such as disadvantaged students. Finally, we describe pitfalls to avoid when applying the method to small or moderate-sized districts or short panels, and we contribute to a discussion of potential threats to the validity of the bias estimates.
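Comparing bias across specifications amounts to re-estimating value added under each model and rerunning the same transition-based test. A minimal sketch under stated assumptions: hypothetical student-level columns (score, pre_score, peer_pre_mean, teacher, school, grade, year) and illustrative specification formulas; the jackknife leave-out and shrinkage steps used in the literature are omitted for brevity.

```python
# Sketch: estimate teacher VA under alternative specifications and
# compare the forecast-bias coefficient for each. Column names and
# formulas are hypothetical, not the authors' actual models.
import pandas as pd
import statsmodels.formula.api as smf

SPECS = {
    "baseline": "score ~ pre_score",
    "with_peers": "score ~ pre_score + peer_pre_mean",  # classroom peer control
}

def teacher_va(students: pd.DataFrame, formula: str) -> pd.Series:
    """Teacher VA as the mean student residual from the given spec."""
    resid = smf.ols(formula, data=students).fit().resid
    return resid.groupby(students["teacher"]).mean()

def compare_bias(students: pd.DataFrame) -> dict:
    """Forecast coefficient per specification; values far from 1
    indicate more bias under the switching quasi-experiment."""
    out = {}
    for name, formula in SPECS.items():
        va = teacher_va(students, formula).rename("va")
        cells = (students.join(va, on="teacher")
                 .groupby(["school", "grade", "year"])
                 .agg(mean_score=("score", "mean"), mean_va=("va", "mean"))
                 .reset_index()
                 .sort_values(["school", "grade", "year"]))
        g = cells.groupby(["school", "grade"])
        cells["d_score"] = g["mean_score"].diff()
        cells["d_va"] = g["mean_va"].diff()
        d = cells.dropna(subset=["d_score", "d_va"])
        out[name] = smf.ols("d_score ~ d_va + C(year)", data=d).fit().params["d_va"]
    return out
```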
Discussant(s)
Kirabo Jackson, Northwestern University
Richard Mansfield, Cornell University
JEL Classifications
- I2 - Education and Research Institutions