“The simple answer is that, in most cases, schools have made mistakes. Of course, this statement isn’t terribly helpful; after all, every school makes at least some mistakes. When it comes to data-driven instruction, however, the type of mistake that a school makes goes a long way toward determining whether or not it will succeed.” Paul Bambrick-Santoyo, Driven by Data: A Practical Guide to Improve Instruction.

According to Bambrick-Santoyo, these are the eight mistakes that matter:

  • Inferior interim assessments
  • Secretive interim assessments
  • Infrequent assessments
  • Curriculum-assessment disconnect
  • Delayed results
  • Separation of teaching and analysis
  • Ineffective follow-up
  • Not making time for data

Yeah. So… what if you are pretty sure that your school is making all of these mistakes? Where do you even begin when you are expected to align curricula, overhaul assessments, and establish productive data teams… all at once?

In an ideal world, teachers have time to define what high-quality assessments are and what high-quality assessment systems do. They draft or locate assessments that align to those indicators of quality, and they test them in their classrooms. They have opportunities to use their growing design expertise and what they learn from field-testing to make revisions. Then the data that emerges informs their inquiry process as well as curriculum design.

In many of the districts I serve, our confidence in the assessments we are using and the data they generate would be higher had this been our reality. We know that the curriculum and interventions we design can only be as effective as the interim assessments we are using to measure progress. Without that critical piece in place, attempting to use data to inform instruction is almost pointless. Right?

Well, quite a few of the teachers I work with thought so, until we were forced to grapple with less-than-ideal circumstances.

The fact is, we didn’t have great interim assessments. So, we weren’t sure that any of the quantitative data we were looking at meant much of anything at all.

We knew we couldn’t let this stall our efforts to improve performance, though. So we started where we were and allowed real-time, in-the-moment, classroom-conducted formative assessment to guide our way.

Turns out that for many, this was a pretty good idea.

I’ll share more about what I learned from some of these experiences tomorrow, but I’m eager to hear from you, too.

  • How do we become increasingly confident about the quality of the assessments we are using and the data they produce?
  • How do we get better at analysis?
  • What stories can you share?
  • Whose work would you recommend?

Mr. Bambrick-Santoyo’s work was endorsed by the New York State Education Department, and the text quoted above was distributed widely throughout our region. I’ve learned a great deal from it and from my conversations with others, but I’m wondering: whose expertise could inform this work further or provide other powerful perspectives about what it means to design quality assessments and use data well?
