Authors: Huy Quoc Chung, Susan El Rowe, Hansol Lee
Presented at 2017 AERA
Abstract: Data-driven instructional decision making is often the standard when it comes to educational innovation. However, the teachers who make and implement these decisions may have little experience interpreting instructional data, or may face barriers in their schools and districts that prevent them from using these data effectively (Hamilton et al., 2009; Ingram, Louis, & Schroeder, 2004; Kerr et al., 2006). This study examines teacher use of formative assessment data as a way to ameliorate excessive wait times and counter-productive assessment cycles. The underlying theory driving this work is teacher learning during professional development from authentic practices (Borko, 2004). Teachers are more likely to learn from professional development when they see that the practices they implement have an impact on their students’ learning (Guskey, 2002). Too often, teachers do not see an immediate impact on their students, so their willingness to change instruction is limited. Providing more timely formative assessment data gives teachers ongoing information about their students. We recruited 18 school districts, 20 schools, 23 teachers, and 1,966 students across one state to participate in a randomized controlled trial testing the impact of teacher use of formative assessment data. Treatment teachers received asynchronous training on the purposes of formative assessment, how to use a study-created formative assessment tool, formative assessment items/tests, rubrics, interpretation tools, and implementing suggested next steps. Treatment teachers gave students a pre-test, which was scored using our tools; teachers interpreted these scores while teaching our target Common Core mathematics standards, 8.EE.5 and 8.EE.6. Teachers then administered a mid-term assessment and repeated this cycle of scoring and interpretation.
Our scoring tool grouped students as having low, medium, or high understanding of conceptual topics such as decimals/fractions, solving equations, and the ability to provide explanations to support answers. The tool also helped teachers see patterns in the errors students were making so they could address misconceptions. After the unit was completed, teachers administered a post-test. Control teachers administered only the pre- and post-tests, for comparison purposes. A simple test of means established differences in performance at post-test. Results indicate that the treatment group outperformed their control peers at post-test (p < .001). Treatment students also had greater gains from pre- to post-test, indicating the effectiveness of constant feedback. The additional use of the mid-term assessment helped teachers and students make adjustments relative to the learning standards. The goal of these phases is to create a comprehensive understanding of formative assessment practices, how best to disseminate them, and their potential impact on classroom instruction. Teacher feedback demonstrated willing use of formative assessments in the classroom, but the time spent on our scoring tool and interpreting results was cited as an area in need of improvement.