Friday, February 25, 2011

Data-Driven Decision-Making Kills Crickets!

“Diana Virgo, a math teacher at the Loudoun Academy of Science in Virginia, gives students a more real-world experience with functions. She brings in a bunch of chirping crickets into the classroom and poses a question:” So begins a story related in the book “Made to Stick” by Chip and Dan Heath. They applaud the teacher for providing a concrete lesson to understand the notion of a mathematical “function.”

I learned a different lesson altogether from this story. After gathering all the data relating chirp rates to temperature, the students plug the information into a software package and, AHA! The hotter it is, the faster crickets chirp, and even better, IT’S PREDICTABLE! Now students have a concrete example of what a function is and what it does. Next comes the point where the story grabs me. The Heath brothers mention (in parentheses no less, even calling it a side note, as if this isn’t the main point) that “Virgo also warns her students that human judgment is always indispensable.” For example, if you plug the temperature 1000 degrees into the function, you will discover that crickets chirp really fast when it is that hot.

The moral of the story is this: Data-driven decision-making kills crickets!
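To see why, here is a minimal sketch of the kind of fit the students might have produced. The measurements and the linear model are my own invention, not the class's actual data or software; the point is only that a fitted function will cheerfully extrapolate far beyond the data it was built from.

```python
# A hedged illustration: fit a line to invented (temperature, chirp-rate)
# data, then extrapolate to 1000 degrees. The numbers are made up for this example.
import numpy as np

# Hypothetical classroom measurements: temperature in deg F, chirps per minute.
temps_f = np.array([55, 60, 65, 70, 75, 80, 85])
chirps_per_min = np.array([60, 80, 100, 120, 140, 160, 180])

# Least-squares fit of chirps = slope * temp + intercept.
slope, intercept = np.polyfit(temps_f, chirps_per_min, 1)

def predicted_chirps(temp_f):
    """Return the fitted model's predicted chirp rate at a given temperature."""
    return slope * temp_f + intercept

print(predicted_chirps(75))    # within the data: a sensible prediction (~140)
print(predicted_chirps(1000))  # far outside the data: ~3840 chirps per minute from a dead cricket
```

The model never complains about the 1000-degree input; only human judgment knows the prediction is nonsense.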

Unfortunately, it can also kill good instruction. Recently, while I was attending a district-wide work session on Professional Learning Communities, a nationally recognized consultant suggested reasons why teachers at a small middle school, who have no colleagues in their subject, should collaborate with subject-area teachers from other schools. He suggested that when these inter-school teams see that one teacher has better data in a given area, the others could learn what that teacher is doing to get such good results.

I’m not against this type of collaboration, but could it be possible that a teacher from one school whose student testing results (data) are not so good is still better than a teacher in a different school with excellent data? For example, might the data at school A look better than at school B because students are getting better support at home? Perhaps school B spends more time making sure students are fed and clothed before concentrating on the job of instruction. What if school A has stronger leadership, and teacher performance reflects teacher morale, support, or professional development?

Teachers must collaborate and share stories about instruction that works, but if student test data is the only metric used to evaluate effectiveness, we are essentially determining that crickets chirp very fast at 1000 degrees. There is a better choice than “data-driven.” Next week I’ll share my thoughts on this alternative, and together we can strive to “save the crickets.”

Follow-up Post: Why Data-Informed Trumps Data-Driven