Dr. Strangelove: Or How I Learned to Stop Worrying and Love Effect Size
I like that title better.
I’ve been intrigued by the concept of effect size for several years. I am not a quantitative person, but I’m curious. I try to keep an open mind, but I still can’t shake a lack of faith in numbers. I try to believe, and sometimes a good quantitative person can move me in their direction just a bit, but I’m still a qualitative guy at heart.
Two weeks ago, our school division hosted its annual “Making Connections” conference, and Dr. Matt Haas, Assistant Superintendent, offered a session titled “You Can Calculate Effect Size.” The fact that many teachers lack basic literacy in research and statistical methods is a detriment to our profession. First, we fail to apply the results of research in the classroom, and second, we fail to adequately participate in the conversations around educational research that drive decisions in our divisions, states, and nation.
In a perfect world, education research would be carefully vetted and practitioners could refer to current research from time to time in order to refine their skills. In the world as it is, research on education is often agenda-driven and practitioners too often fall prey to ideas that merely sound good. (Anyone still encouraging students to discover their Learning Styles?)
In the world of the classroom, it would do teachers well to understand at least a little of the methods and language that researchers are using to influence the national conversation on education. That influence operates universally, as in the movement to apply value-added measurements to teacher evaluations, and it reaches into the classroom in the form of instructional methods teachers are expected to use.
In the absence of any “authoritative body” to filter and condense the growing body of educational research into something productive for American education, teachers need to develop a better understanding for themselves of how to interpret research.
Ready for your first lesson?
Effect Size = (Mean of Data Set Two minus Mean of Data Set One) divided by Standard Deviation of Data Set One.
If you give a pre-test and a post-test, data set one is the pre-test. Data set two is the post-test. Sometime between pre-test and post-test you “apply a treatment.” In the case of education, an instructional strategy. The effect size measures how much difference the treatment made.
If, like me, you’re not a numbers/stats person, it’s easy to stop here and pretend that it’s too confusing to waste your time on. This is too important for that; if you didn’t get it, read it again. An effect size should tell you how much an instructional strategy facilitated or inhibited student growth. Yes, growth (or value-added, if you’d rather).
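For those who find formulas easier to trust when they can see them run, here is a minimal sketch of the calculation described above. The scores are made up for illustration, and I'm using the population form of the pre-test's standard deviation; a researcher might choose the sample form or a pooled deviation instead.

```python
def effect_size(pre_scores, post_scores):
    """(mean of post-test - mean of pre-test) / standard deviation of pre-test."""
    n = len(pre_scores)
    pre_mean = sum(pre_scores) / n
    post_mean = sum(post_scores) / len(post_scores)
    # Population standard deviation of the pre-test scores
    variance = sum((x - pre_mean) ** 2 for x in pre_scores) / n
    sd = variance ** 0.5
    return (post_mean - pre_mean) / sd

# Hypothetical scores for one class, before and after a "treatment"
pre = [70, 75, 80, 85, 90]
post = [78, 82, 88, 91, 96]
print(round(effect_size(pre, post), 2))  # prints 0.99
```

In this made-up example the class's mean rose from 80 to 87, against a pre-test standard deviation of about 7.07, giving an effect size of roughly 0.99: the average score grew by nearly one standard deviation.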
Take this tool for what it’s worth. It’s the primary tool used by researchers such as Marzano and other education “meta-analysts” to determine which instructional methods work, that is, which techniques have the greatest effect size on student achievement.
Still, the greatest power in using effect size is informative, not prescriptive. For example, Marzano’s well-known book “Classroom Instruction that Works” presents strategies that have been shown, through meta-analysis, to have a higher-than-average effect size on student learning. He does not imply (and even directly states otherwise) that a given strategy WILL work on every student in every situation.
That’s likely the greatest flaw with this type of research: what should be informative for our educational practices becomes prescriptive through policies and evaluative methods. I imagine that across the country, more than a few teachers have been evaluated on consistently applying the “strategies that work” without regard to immediate evidence of whether the strategies are working or not. That leaves them skeptical and critical of the entire body of work that attempts to isolate the most effective classroom strategies.
This is why all of us, from classroom teachers to legislators enacting policy, should have a better understanding of educational research.