How do principals assign students to teachers? Finding evidence in administrative data and the implications for value-added

Steven Dieterle, Cassandra M. Guarino, Mark Reckase, Jeffrey Wooldridge

Research output: Contribution to journal › Article › peer-review

Abstract

The federal government’s Race to the Top competition has promoted the adoption of test-based value-added measures (VAM) of performance as a component of teacher evaluations in many states, but the validity of these measures has been controversial among researchers and widely contested by teachers’ unions. A key concern is the extent to which nonrandom sorting of students to teachers may bias the results and lead to a misclassification of teachers as high or low performing. In light of this potential for bias, it is important to assess the extent to which evidence of sorting can be found in the large administrative data sets used for VAM estimation. Using a large longitudinal data set from an anonymous state, we find evidence that a nontrivial amount of sorting exists, particularly sorting based on prior test scores, and that the extent of sorting varies considerably across schools, a fact obscured by the types of aggregate sorting indices developed in prior research. We also find that VAM estimation is sensitive to the presence of nonrandom sorting. There is less agreement across estimation approaches regarding a particular teacher’s rank in the distribution of estimated effectiveness when schools engage in sorting.
Original language: English
Pages (from-to): 32-58
Journal: Journal of Policy Analysis and Management
Volume: 34
Issue number: 1
Early online date: 12 Jul 2014
DOIs
Publication status: Published - 30 Jan 2015
