Next Generation of Value-Added Models and Indicators
Given the growing importance of outcome-based accountability systems and the widespread availability of longitudinal student assessment data, the time is ripe for a project focused on developing the next generation of value-added models and indicators. Value-added modeling has emerged as the most promising strategy for measuring the performance of schools, teachers, programs, and policies. This research targets statistical and conceptual problems that must be addressed if value-added statistical models are to produce valid and useful measures of school and program performance. The project will develop web-based software to enable all school districts to pilot and implement value-added systems. The project is working on solutions to the following six major statistical and conceptual problems:
- Weak and limited information on students, the norm in administrative data sets, results in sample selection bias in standard estimates of school and program productivity (apart from studies based on randomly assigned programs). The preferred approach (in the common case in which it is not possible or desirable to randomly assign students to schools) is to explicitly model selection bias and identify the data structures and estimation techniques that support elimination or reduction of that bias.
- Conventional value-added measures are insufficiently equity-oriented: they fail to measure whether schools perform differentially for different population groups (high and low achievers, poor and non-poor, etc.). The preferred approach explicitly models the way in which district and school performance varies with prior achievement and other student characteristics (a model of this general form is sketched after this list). This approach enables states and districts to monitor and set explicit value-added performance objectives for schools serving low-scoring students and other policy-significant groups.
- Value-added indicators of school performance are typically disconnected from explanation, diagnosis, and evaluation and thus are often ignored. Schools want answers to probing questions such as: Why does my value-added performance differ from my average attainment? What can I do to raise my value-added performance? The preferred approach embeds measurement, diagnosis, and explanation in a unified value-added system.
- Evolving and/or mismatched assessments and test scores measured on different scales confound measurement of achievement growth and thus threaten the validity of conventional value-added results. The preferred approach is to generalize value-added tools so that they can encompass multiple test scales and tests with overlapping but not identical content coverage (one way of placing scores from different tests on a common scale is sketched after this list).
- Students who move into a district or state after the administration of regular standardized assessments cannot be included in growth analyses unless the unrealistic assumption is made that these students and non-mobile students have similar population characteristics. The preferred approach is to assess these students at the point of entry and incorporate them into an appropriately generalized value-added model.
- Value-added indicators and evaluation tools are rarely, if ever, designed to mesh with indicators and performance targets of the No Child Left Behind Act (NCLB). The preferred approach is to join the two in a coherent and supportive manner.
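To make the second bullet above more concrete, the following is a minimal, illustrative sketch of a value-added growth model with differential effects, written in Python with the statsmodels library. It is not the project's actual software, and the data file and variable names (students.csv, post_score, prior_score, low_income, school_id) are assumptions made only for the example: each school receives a random intercept (its overall value-added effect) and a random slope on prior achievement (its differential effect for low- versus high-achieving students), with a student characteristic included as a covariate.

```python
# Minimal, illustrative value-added model with differential effects.
# Assumed input: a student-level file with a current-year score, a
# prior-year score, a poverty indicator, and a school identifier.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # hypothetical data file

# Mixed-effects growth model: current achievement regressed on prior
# achievement and a student characteristic. groups= gives each school a
# random intercept (its value-added effect); re_formula adds a random
# slope on prior_score, so measured effectiveness can differ for low-
# and high-achieving students.
model = smf.mixedlm(
    "post_score ~ prior_score + low_income",
    data=df,
    groups="school_id",
    re_formula="~prior_score",
)
result = model.fit()

# Each school's estimated random effects (an intercept shift and a
# prior-score slope shift) serve as its value-added indicators.
print(result.summary())
print(result.random_effects)
```

The project's models address further issues that this toy example omits, such as the selection bias and test-scale problems described in the other bullets.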
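The bullet on evolving and mismatched assessments notes that scores reported on different scales cannot simply be compared or differenced. Purely as an illustration of the scale problem, the sketch below applies linear (mean/sigma) linking, which places scores from one test on the scale of another by matching means and standard deviations; the project's preferred approach is to generalize the value-added model itself rather than to rely only on such pre-linking, and all numbers and names here are hypothetical.

```python
# Illustrative linear (mean/sigma) linking of two hypothetical test scales.
import numpy as np

def linear_link(x, mu_x, sd_x, mu_y, sd_y):
    """Map scores x from scale X onto scale Y by matching means and SDs."""
    return mu_y + (sd_y / sd_x) * (np.asarray(x, dtype=float) - mu_x)

# Hypothetical example: an old test with mean 250 and SD 40, linked onto
# a new reporting scale with mean 500 and SD 100.
old_scores = [210.0, 250.0, 310.0]
print(linear_link(old_scores, mu_x=250, sd_x=40, mu_y=500, sd_y=100))
# -> [400. 500. 650.]
```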
Funding
U.S. Department of Education, Institute of Education Sciences
Project Website
http://varc.wceruw.org
Status
Completed on October 31, 2008
Contact Information
Robert Meyer
Phone: (608) 265-5663
rhmeyer@wisc.edu