[Image: inquiry cycle poster]

When I first began learning how groups of teachers might use data to empower practice and serve kids better, most of the models I studied felt very linear and seemed to be driven by trends in standardized assessment data. A decade ago, I spent quite a bit of time helping teachers make meaning from that kind of data, and then we would design common formative assessments that looked much like the tests that produced it. Then? We would feel all kinds of demoralized when scores did not improve, even after we analyzed the results and did our best to intervene. We were data-driven, after all.

We were also burned out.

And we hated what this approach was doing to kids.

That was many years ago.

I’ve spent the last week working with middle and high school inquiry teams who are using data to inform their hunches about how to best support learners. When I began my work with this district three years ago, I interviewed teachers. I spoke with kids. I listened to what they told me about their perceived strengths and needs. Then, I examined trends in state assessment data. These were the data we had at the time. Not the best, to be sure. But it’s where we were. And we established some solid hunches about where to begin our inquiry work, because I had talked with kids and teachers rather than relying on scores alone.

We began examining research and promising practices together. Teachers began sharing what they learned from their studies and what they learned from their students when they tested new practices in their classrooms. Our understanding of how kids were struggling, and more importantly, why, became increasingly refined. Formative assessment helped us gain confidence in the path we’re on and in our ability to intervene in just the right way.

This remains messy, hard, and beautiful work.

This week, a teacher who is new to this process asked that I sketch a simple diagram of what inquiry teams do. I created the poster above, and a conversation about “multiple measures” quickly ensued:

  • Which measures are best?
  • How do we know?
  • What data do we have?
  • How should we use it?
  • What does it tell us?

We don’t know which measures are best. It’s possible that we have too much data. If we already have evidence-based hunches about where learners struggle, those hunches can guide our data analysis. We can pull the data that best empower our inquiry rather than drowning ourselves in every available point first. Data don’t tell us anything on their own; we make meaning from them. We can begin our work as an inquiry team anywhere within the cycle above. I’m learning it isn’t linear, and the less confident we are in the quality of the data we have and the process we are using, the more likely our questions are to give us pause:

  • What else do we need to know?
  • Which data provide that information?
  • How can we keep getting better at this?
