"Developing A System of Practical Measures, Routines, and Representations to Inform and Enhance Instructional Improvement Initiatives"
American Educational Research Association (AERA) Annual Meeting
Theme: Leveraging Educational Research in a “Post-Truth” Era: Multimodal Narratives to Democratize Evidence
April 5-9, 2019
Session: The Scholarship of Improvement: Building Community Around an Emerging Tradition of Practice-Focused Research
Authors: Kara J. Jackson, Paul A. Cobb, Erin Craig Henrick, Thomas M. Smith, June Ahn, Marsha M. Ing, Hannah Nieman, Nicholas M. Kochmanski, Fabio Campos, Starlie Chinen, Daniela Digiacomo, Maria Hays, Emily C. Kern, Meaghan Beth McMurran
Abstract: This paper focuses on a key aspect of improvement research: the development and use of practical measures to inform improvement efforts (Bryk, Gomez, Grunow, & LeMahieu, 2015). Specifically, we report on an ongoing effort to develop practical measures of (1) aspects of high-quality mathematics instruction linked to student learning, and (2) supports for teachers to improve the quality of instruction and students’ learning.
We first describe criteria for effective practical measures that differentiate them from research and accountability measures, including that they should fit with, rather than disrupt, practitioners’ current practices and can thus be used frequently to provide ongoing feedback. We also give an overview of the envisioned system of measures for supporting instructional improvement efforts, together with the associated routines and representations that five research-practice partnerships (RPPs) are developing collaboratively.
We illustrate our process for developing practical measures and investigating their use by focusing on a measure of the quality of whole-class discussions that takes the form of a short student survey. We describe how we drew on existing research to develop initial items and how we iteratively tested and revised the items until students’ responses (aggregated at the classroom level) matched expert assessment of the quality of the corresponding aspects of discussions.
In one RPP, mathematics specialists used the measure to inform a curriculum writing initiative aimed at improving the rigor of lessons and units. As part of the effort, six teachers (grades 6-8) routinely piloted newly written lessons and administered the survey measure to their students (56 administrations in total). The resulting data provided the mathematics specialists with feedback on the quality of discussions associated with specific lessons. Researchers audio-recorded sessions in which the mathematics specialists discussed the data and revised lessons, and coded the rigor of the written lessons using the Instructional Quality Assessment (Boston, 2012). Analysis indicates that the data provided by the measure supported productive revisions of the written lessons.
In a second RPP, the measure has been used to enhance coaches’ work with teachers. We are investigating coaches’ use of the measure as they conduct coaching cycles with individual teachers by observing and live coding focal lessons, by audio-recording the coaches’ and teachers’ initial planning conversations and subsequent debriefing conversations, and by audio-recording semi-structured interviews with coaches and teachers. Analysis of the resulting data clarifies the specific ways in which the coaches’ use of the surveys enhanced their efforts to support teachers’ learning; the surveys thus served as supports for, as well as measures of, improvement.
These results illustrate the contribution that practical measures can make to instructional improvement efforts in core content areas by serving as levers for (and measures of) improvement.