Hora, Oleson Study Data-Driven Decision-Making in STEM Departments
November 5, 2013
WCER researchers Matt Hora and Amanda Oleson have co-authored three new research reports spelling out recommendations to science, technology, engineering, and math (STEM) faculty and administrators on the use of data for improving curriculum and instruction.
Their recommendations are based on a preliminary analysis of fieldwork, completed in the spring of 2013, that included interviews with 83 faculty and administrators, as well as classroom observations of 59 STEM faculty at three large, public research universities in the U.S. and Canada. The reports, co-authored by Jana Bouwma-Gearhart and Jennifer Collins, both of Oregon State University, summarize findings from the interviews and observations, along with surveys of students about their perceptions of instructional quality.
Hora and Oleson said their studies were motivated by the growing emphasis on data-driven decision-making in K-12 school districts, where data such as standardized test results are used to inform decisions about personnel, resource allocation, and curriculum. Yet little is known about how, if at all, STEM faculty and administrators actually use data to inform course design, evaluate instructional quality, and determine programmatic efficacy, they said.
The researchers first examined the routines that existed for designing and evaluating courses. They found that formal policies and procedures often governed the creation or substantial alteration of STEM courses, but that once created, these courses were revamped on an ad hoc basis each semester. As a result, most faculty planned and taught their courses with little to no oversight. Exceptions included cases where colleagues in team-taught courses or accreditation processes provided a layer of oversight and standards enforcement, or where curriculum committees determined the content of lower-level courses. Overall, however, STEM faculty operated with a high degree of autonomy in their teaching responsibilities, with few organizational mechanisms or routines in place that could serve as part of a data-driven decision-making system, the researchers found. The studies also found that while instructors were regularly subject to end-of-term appraisals, the courses they taught were rarely evaluated to determine whether they had met the learning goals set for students.
However, many faculty do collect and analyze a variety of data to continually monitor student learning and the quality of their courses. These practices include traditional and online quizzes and exams, informal surveys of students, and clicker-response systems that elicit student feedback and gauge progress toward key concepts. But only rarely are these data analyzed through a formal procedure and incorporated into a continuous improvement system, and when they are, the method of incorporation is often left to the individual instructor. Far more frequently, instructors refer to the information informally as a way to monitor their students’ progress. Hora and Oleson suggested that these practices, while not rising to the level of formal data-driven decision-making as advocated in K-12 districts and schools, nonetheless indicate that many faculty are collecting, analyzing, and reflecting upon data from their own students.
While higher education has thus far avoided accountability-related policies that mandate data-driven decision-making, the researchers caution that calls to implement such policies may be fast approaching. For postsecondary administrators and instructors to maintain some disciplinary and individual autonomy over their teaching practice and how it is evaluated, they must get ahead of these trends by developing their own data systems, the authors recommended. Such systems should build upon existing types of data use while assisting faculty to articulate measurable learning goals, identify appropriate assessments and types of data to track progress toward those goals, and put in place systems for storing and managing data. Key to using such data systems to full effect is support staff who can serve as “data helpers.” Ideally, these staff should be highly literate in the discipline’s pedagogical data and available to help design formative and summative feedback systems for individual courses.
In contextualizing their findings, the authors emphasized that faculty will be reluctant to embrace continuous improvement systems unless they feel confident the systems will not be used as punitive assessments of instructor quality. Moving forward, Hora, Oleson, Bouwma-Gearhart, and Collins plan to give reports based on the ISOP framework to all participants in the study. They will then monitor how the participating instructors respond to the reports over the next two years, noting in particular changes in course planning and teaching, student experiences, and the use of data in curricular decision-making. The four researchers said they hope the reports are a first step toward remedying the current national scarcity of data about teaching that is provided to STEM faculty.
For more information, visit http://tpdm.wceruw.org.