I learned a different lesson altogether from this story. After gathering all the data relating to chirp rates and temperature, the students plug the information into a software package and... AHA! The hotter it is, the faster crickets chirp, and even better, IT'S PREDICTABLE! Now students have a concrete example of what a function is and what it does. Next comes the point where the story grabs me. The Heath brothers mention (in parentheses no less, even calling it a side note, as if this isn't the main point) that "Virgo also warns her students that human judgment is always indispensable." For example, if you plug the temperature 1000 degrees into the function, you will discover that crickets chirp really fast when it is that hot.
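To make that last point concrete, here is a minimal sketch of what a lesson like Virgo's presumably boils down to: fit a line to chirp rates and temperatures, then ask the model a question it has no business answering. The numbers below are invented for illustration (roughly in the spirit of Dolbear's law), not the class's actual data, and the choice of Python is mine, not the story's.

# Hypothetical chirp-rate observations (temperature in degrees F, chirps per minute).
import numpy as np

temps_f = np.array([55, 60, 65, 70, 75, 80, 85, 90])
chirps_per_min = np.array([60, 80, 100, 120, 140, 160, 180, 200])

# Ordinary least-squares line: chirps = slope * temp + intercept
slope, intercept = np.polyfit(temps_f, chirps_per_min, 1)

def predicted_chirps(temp_f):
    """Return the model's chirp-rate prediction for a given temperature."""
    return slope * temp_f + intercept

print(predicted_chirps(72))    # interpolation: about 128 chirps/min, plausible
print(predicted_chirps(1000))  # extrapolation: about 3840 chirps/min from crickets that no longer exist

The fit itself is a fine summary of the observations; nothing in the math flags 1000 degrees as a ridiculous input. That is exactly where the human judgment Virgo warns her students about has to come in.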
I'm not against this type of collaboration, but could it be possible that a teacher at one school whose student testing results (data) are not so good is still better than a teacher at a different school with excellent data? For example, might the data at school A look better than at school B because students are getting better support at home? Perhaps school B spends more time making sure students are fed and clothed before concentrating on the job of instruction. What if school A has stronger leadership, and teacher performance reflects teacher morale, support, or professional development?
Teachers must collaborate and share stories about instruction that works, but if student test data is the only metric used to evaluate effectiveness, we are essentially declaring that crickets chirp very fast at 1000 degrees. There is a better choice than "data-driven." Next week I'll share my thoughts on this alternative, and together we can strive to "save the crickets."
Follow-up Post: Why Data-Informed Trumps Data-Driven
Since we both work in a basement, we are very familiar with crickets: specifically the house cricket, mole cricket, cave cricket, and field cricket. You will have a tough time saving them given our other frequent visitor, the house centipede, aka "mustache bug," aka "thousand-legger." They can live to be 7 years old...who knew? They are the ones that really disrupt our classes when they scoot across the floor of the room...they get a more immediate and vocal reaction, and as the teacher you usually have to stop and do something to resolve their presence. But they are beneficial insects and eat other bugs, including harmful pests. Perhaps that's analogous to the side effects of much that is done in the name of improving education and teaching.
There are indeed many variables in the process of educating our kids, and when we lose sight of that fact we do so at great peril. The challenges facing each student, teacher, and classroom are unique. Trying to make classes the same by misusing data is a mistake. Remember that every cricket or other bug you see and analyze is only one part of a complex web. Just as in our schools, there is a lot more going on than what you can "see" by using data.
You forgot the Spricket-- the infamous spider/cricket hybrid that can eat a mouse. Maybe we should gather some data and find out the best way to remove a bug from your classroom while causing the least disruption among students.
You said "variables"; that's a real key. And not very much educational research involves strict control of variables. That's not necessarily a bad thing: creating control and experimental groups of students could lead to all sorts of problems. Without this type of research, we cannot let data DRIVE our decisions; it must INFORM our decisions. (That's a teaser for the next post.)
I'm afraid I've got to disagree with you on this one. Data-driven solutions to America's education problems are the best we've got. They provide us with objective criteria on which to base our decisions. If we minimize the importance of data in creating solutions to our education problems, we risk coming up with faulty ideas that have little relevance to the realities of our educational system.
There are certainly problems with data-driven solutions. You can have faulty data, interpret the data incorrectly, or reach different conclusions from the same set of data. The thing is, these problems are only magnified if we give more subjective criteria primacy in our decision making. Data gives us something concrete to look at, something we can actually use to PROVE our assumptions.
Now, I'm not arguing that we forgo subjective criteria entirely. It is plainly obvious that data does not, and cannot, provide the full story in any situation. We've got to take into account factors unique to each school and teacher. My fear is that if we make these more subjective criteria the primary instrument in our decision making, we will only widen our possibility of error.
I'll take the mustache bug over the spricket head to head. Can you factor in variables like when we teach in a room with no lights?
Thought this was an interesting read in the paper today:
http://www.washingtonpost.com/wp-dyn/content/article/2011/02/25/AR2011022505002.html
"TFA has become a flourishing reproach to departments and schools of education. It pours talent into the educational system - 80 percent of its teachers are in traditional public schools - talent that flows around the barriers of the credentialing process. Hence TFA works against the homogenization that discourages innovation and prevents the cream from rising."
Although I take some exception to the idea that being in TFA defines you as talented, it does mean you care at some level about what you are doing...a thread common among all effective teachers...and they have a well-thought-out approach to simply making things better. I also get somewhat defensive about totally dismissing the credentialing process. How about we do the same for editorial writers, pilots, doctors, day care centers? TFA does share a somewhat cautious approach to data: http://www.ascd.org/publications/educational-leadership/dec10/vol68/num04/Leadership,-Not-Magic.aspx
(I laughed at the part about taking out the desks)
But their search for answers about effective teaching is an interesting one.
In Will's piece he mentions, "This year TFA will select 5,300 from 48,000 applicants, making it more selective than most colleges." What does that statistic really mean? TFA is pretty neat, especially in the way they link members together in a network of support. They do untold good for some of our nation's most needy kids and deserve commendation. But if TFA is viewed as "the solution" and applied on a larger scale, what variables will change? When teaching in a different environment, would these gains be more modest? Is the members' effort sustainable for an entire career? What about this selectivity? How does that affect things? I think the leadership part would very much still apply no matter who the teacher is or where they come from. But what does it mean for crickets? I am skeptical of "easy" solutions to such a complex human problem.
Maybe TFA could start sending some folks into schools to "train" or share some of the things they've learned. It has to be better than our normal PD. But I am wary of people who have solutions but are not teachers or teaching...not sure why. Just how I feel. Two or three years is simply not reflective of a full career. It is interesting that TFA is searching hard for a quantifiable way (data, to a large degree) to predict whether applicants will succeed and, more importantly, why. If they go too far down that road, then they can only take women, since the data shows women will likely be more effective.
I found this to be a good article as well:
http://www.theatlantic.com/magazine/archive/2010/01/what-makes-a-great-teacher/7841/
Check out the last line on the 2nd page of that article.
This comment reflected my emotions pretty well
http://www.theatlantic.com/magazine/archive/2010/01/what-makes-a-great-teacher/7841/#comment-45396645
To Anonymous,
1) Your comment was caught by the spam filter for some reason; I just released it. I didn't want you to think we were editing based on your P.O.V.
2) Thank you so much for disagreeing. I honestly hate the back-scratching that goes on in the blog world so frequently. I know that Lindsay and I do that in this space, but we obviously think a little alike considering we started this blog together.
So, I'll be the first to admit that I am a fan of the subjective. Primarily, I don't think I'm anti-objective as much as I just don't think that objectivity is as possible as we think. Sometimes it is better to lay our biases on the table and work with them in such a way that everyone involved can understand them.
Your second paragraph is what really hits me. Data should not be used to prove our assumptions, as it usually is. If properly used, data should shape and inform the assumptions we use to move forward. I'm not sure what your background is, but in the education world, too often I see leaders with very strong assumptions that precede the collection and interpretation of data.
In the end, we may not differ as much as it seems on the surface. I plan to follow this post up with what I think is a better way, "data-informed" not "data-driven." I'd love to hear what you think.
To Rich: Thanks for the links regarding TFA, that could be a good topic to explore in later posts.
Nothing worthwhile can ever be "proven." I've seen data-driven decisions used as a way for decision makers to deflect any criticism and even to justify preconceived agendas which, bottom line, harm teaching. As an older teacher, I think data is overrated. Not because of the three main problems you list, but because the data is almost always misused by those who control education. We teachers should use data as you suggest, and do so to inform what we do...any good teacher does this. But in the hands of those not working in the classroom it is a scary weapon, and one that has caused me untold harm and countless hours of trouble.
Wow, Anonymous! I've totally convinced you to change your mind. Seriously, I don't know if folks know that you can post as a name/URL without entering a URL, which allows you to stay anonymous but still provide a little identification.
Thanks for the comment.
The primary reason "data driven" is being pushed is the extraction of profit. When you have small class sizes and teachers can collaborate and communicate about the needs of individual students as well as their own teaching methods, you have a real-time, data-driven system operating within the most powerful computing platform on the planet: the human brain. It's a matter of trusting teachers' skill, professionalism, and motivation, which I do. Distrust of all that is the implicit message to the public in the push for evaluating teachers based on student growth.

Tests should be used, as was stated earlier, to inform teachers about their students' needs. If class sizes are small enough, this knowledge would be available on a moment-by-moment basis through the relationships between students and teachers. Teachers could then test as needed to confirm it for themselves as they saw fit.

The size and complexity of the data systems that would actually be needed to realize their stated goals is in all probability astronomical. I look at the work needed for programs like Deep Blue (chess) and Watson (Jeopardy), which were dedicated to a single, non-dynamic though complex task, and shudder to think of the never-ending bug fixes and inaccuracies we will have to endure in the very dynamic environment of educating varied populations whose members do not remain static. As has been seen in NYC with the release of VAM-based teacher ratings, the claims made to justify the time and resources spent were hollow; the whole endeavor resulted in having nothing useful to show for being "data driven." No useful data was produced.