This conclusion was drawn from a research study by three Harvard and Columbia economists that definitively connects career earnings to a student's access to a good teacher in grades 4-8. So they have essentially proven something common sense already tells us: teachers matter.
Of course they do. There are good ones, and then there are some not so good. Just as there are good ways to use research and some not so good. The basic problem I have with this research is that it defines a good teacher by student test scores alone. If the same were done in my school, or any other, I am 100% certain the data would be misleading. We don't all teach the same level, and thus not all the same kids with the same learning needs, and they may not all have the same goals in mind. By such logic it could be argued that, to some degree, the students we teach define us as teachers. So what remains unclear despite this study is how best to measure quality.
Such an approach, using testing to identify "good" teachers, assumes cause and effect. It is then parlayed into the dreaded Value-Added Measurement of teacher effectiveness. Never mind all the other factors affecting kids during their incredibly complex development and education. Consider whether students who already do better on tests are simply more likely to find success in school, get into a better college, and eventually land a higher-paying job. Does the evidence suggest students from higher socioeconomic levels do better on tests, thus better in school, and thus generally earn more money than their peers? Do student goals differ? College admission is one goal. But who do we hold accountable when goals are not met?
The problem is not necessarily with the research itself; it is how it will probably be used. I can foresee this evidence used as rationale for an increased emphasis on the validity of Value-Added Teacher Evaluations, evaluations that will rely disproportionately on student testing data. Decision makers and politicians beholden to the appearance of taking action in our perceived education crisis will likely fail to make reasonable changes from such research and instead use it to justify kneejerk and potentially harmful ones. They do not mean harm; they just lack sufficient understanding of all that is involved in education.
Numerous videos accompany the New York Times article, and they do much to reinforce the notion that our schools are failing. I am increasingly frustrated by the media and their lack of objectivity on education. Instead of presenting a balanced view of reality, they (and NBC) fall prey to the gloom-and-doom model to attract attention and readers. This undermines public confidence in our schools and has become a self-fulfilling prophecy. The video at one point cites the low grades the public assigned when asked to grade our public schools to illustrate this point.
If such data-driven decision making were a sound approach, then we should follow suit with other public institutions. Shall we start with our political ones and remake them all in a flurry of reform? I suspect that course would meet greater and more organized resistance and be deemed unwise. The video goes on to mention that among teachers "there's growing frustration that those skills can't be measured by a test... standardized tests are an accurate reflection of a student's achievement. 60% say those tests determine what they teach." Subjectivity (using real people) as a component of measurement isn't flawed enough to justify swinging the pendulum too far the other way. Teachers know that. If they didn't, they'd make course recommendations based solely on how kids score on a test, or assign grades based only on tests.
The increasing role of data in teacher hiring, retention, and evaluation does something few other human endeavors do: it relies on data more than on people. Whether using VAM (Value-Added Models) in such a process is smart or dumb is taken up by Bruce Baker (a guy way smarter than anyone at TU), who identified three main flaws with this approach. You don't even need to understand what he's saying to figure out he sees flaws with VAM.
- The first error is a deterministic view of a complex and uncertain process.
- The second common error becomes apparent once the need arises to concretely measure quality.
- The third error is a belief that important traits are fixed rather than changeable
- The difference cited in lifetime earnings amounts to $4,600. Over a 20-year span that's about $230 a year, $19 a month, $4.42 a week, or less than a dollar a day. What if a student had a great teacher but chose a more service-oriented profession with less earning potential...hmmm? Let's take for example...maybe a job like...TEACHING! Leave it to economists to measure worth solely by income. Hearts of stone, those folks.
- Kids with good teachers have a 0.5% greater chance of going to college. So if a bad teacher taught 200 kids and a good teacher taught 200 kids, the good teacher would send just 1 more on to college.
- A classroom is projected to see a $266,000 increase in career earnings. If I taught a class of 30 kids who each worked for 30 years, that'd be roughly a $295 difference per year for each of them.
- Robert H. Meyer of the Value-Added Research Center is quoted as saying "That test scores help you get more education, and that more education has an earnings effect — that makes sense to a lot of people." The problem with that is clear to an educator: a system that relies too heavily on testing in determining the fate of our kids. Most of the nations (Finland, for example) that outperform the United States on international tests do not share this test-heavy approach.
- The link between teacher performance and student test scores, while statistically demonstrated, is not ironclad. Using this data in such a way has the potential to undermine the collegial and supportive professional environment among teachers and to disrupt and discourage peer support. The effect would hurt all students and counteract any gains, real or perceived. In short, it won't matter who you hire; it will undermine our profession.
- “The message is to fire people sooner rather than later,” Professor Friedman said. WTF? So a new teacher with less experience, who needs time to develop as a professional and master the craft, should be fired? What about the teacher who is asked to teach a different curriculum each year? Or one stricken with health problems for a lengthy period of time? That seems like sound reasoning... huh? The way to strengthen education is to fire people. Did you hear that message? In other words...blame the teachers.
- Is it possible, as suggested by someone who questions the validity of such research, that value-added is simply the only financially practical way to tell the difference between teachers? "Observations or videotapes of classroom practice, teacher interviews, and artifacts such as lesson plans, assignments, and samples of student work" are all financially prohibitive; they'd take too much time and money to implement effectively. To me it is simple...you know a good teacher when you walk into their room...and yes, that is a subjective measure. But so is measuring learning. Standardized tests are more objective, but we'd be foolish to place any more weight on them than we already do.
- There is another group with growing influence on education policy that I am wary of: psychometricians. They contend that a test is only valid if it actually measures what it is supposed to. I haven’t seen a test, nor would I want to, that can measure how good a teacher someone is.
- Whether it is John Maynard Keynes or Adam Smith, economics is the "dismal science," and it essentially amounts to theory. Kinda like education theory. I read some of the comments on the article, and they seemed more soundly grounded in the real world.
- Are similar data-heavy measures applied to similar things, like our curriculum, online classes, or charter schools? Would they be welcome in private schools, since education is education...public or private? Or could the same conclusion be drawn from how far back a kid sits in a classroom, how fast they finish a test, or whether or not they're a student-athlete?
- "But controlling for numerous factors, including students’ backgrounds, the researchers found that the value-added scores consistently identified some teachers as better than others, even if individual teachers’ value-added scores varied from year to year." Anyone bother asking why it varied?
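For what it's worth, the back-of-envelope arithmetic behind the dollar figures in the bullets above can be checked in a few lines of Python. This is just a rough sketch: the $4,600 and $266,000 figures come from the study as reported, while the 20-year earning span and the 30-student, 30-year classroom are this post's own assumptions.

```python
# Per-student lifetime earnings difference cited in the article.
lifetime_gain = 4600          # dollars over a career
years = 20                    # assumed span for the breakdown

per_year = lifetime_gain / years
per_month = per_year / 12
per_week = per_year / 52
print(f"${per_year:.0f}/year, ${per_month:.2f}/month, ${per_week:.2f}/week")
# → $230/year, $19.17/month, $4.42/week — under a dollar a day

# Per-classroom career earnings increase cited in the article.
classroom_gain = 266_000      # dollars for one classroom
students = 30                 # assumed class size
career_years = 30             # assumed working years per student

per_student_per_year = classroom_gain / (students * career_years)
print(f"${per_student_per_year:.0f} per student per year")
# → $296 per student per year
```

Run it and the numbers land where the bullets put them: impressive-sounding lifetime totals shrink to pocket change once spread across a working life.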
The study simply confirms what we already knew. The question before us is how, or whether, that is useful. Let me be the 10,000th person to tell you that over-representing the value (pun intended) of value-added is unwise. We have begun to employ this approach across the nation in a sweeping tide that shows little sign of turning back. We've seen the damage such a tide can do when it advances too far unchecked. What is even more frustrating is that we seem to be spending more time, money, and resources to develop, justify, and advance these methods, all for what can at best be described as a minimal return. Thus we push the tide even farther and do untold damage.
So the study found that teachers matter. Teachers matter a lot, and all this data shouldn't. Perhaps a study showing parents matter would be equally useful. Allow me to briefly respond to the research after what has grown into a lengthy post: "Well...Duh!" I'll restate what I find most at fault with all of this: data-driven reform attempts to replace what throughout history has been the skilled art of teaching with some sort of exact science. In our effort to continually educate and develop the human mind, we are forgetting that we are still dealing with people, and we cannot do the job alone. The funny thing about people and their behavior is that more often than not they find ways to defy scientific explanation.
Value-added is an oxymoron if ever there was one.