It is so tempting to try to apply cognitive science results in education. It seems like an obvious step on the long road of moving education from a field of theory and philosophies to one more grounded in empirical research. Yet, learning myths are persistent. Even scarier, “those who know the most about neuroscience also believe the most myths.”
Educators may have the best intentions when trying to infuse their practice with evidence, but all too often they are not equipped to be critical consumers of research. Worse, the education profession has historically been wrapped in “thoughtworld” 1, where schools of education have taught the same ideas about effective teaching and learning for decades without a basis in empirical research. These same ideas are passed along to principals, district administrators, and teachers, leaving nary a critical voice to stop the myths from being repeated and mutually reinforced.
Effectively conducting empirical research, translating research for policymakers, and implementing research-based program design is my job. I came to education purely from a research and policy perspective, and I am equipped to understand some of the empirical research done on effective schooling 2.
I have to confront an awful history of “outsiders” like myself who have brought round after round of poorly supported, poorly evaluated reforms. I have to confront the history of districts and schools discarding some very effective programs because of leadership changes, lack of resources, and most of all a lack of good, systematic evaluation of programs. And I have to be damn good at what I do, because even a small misstep could paint me just like every other “expert” who has rolled through with the newest great idea.
I think this is why I tend to favor interventions that are very small. Simple, small, hard-to-“mess-up” interventions, grounded in research and implemented just a few at a time, have tremendous potential. I love the oft-cited work on filling out the FAFSA along with tax filing at H&R Block. It is simple. There is no worry about “dosage” or implementation fidelity. There are both sound theoretical reasons and empirical results from other domains that suggest a high likelihood of success. It has the potential to make a huge impact on students without adding any load to teachers who are, say, implementing a brand new and complicated curriculum this year. This is how you earn trust: by building success.
I am also a fan of some really big, dramatic changes, but how I get there will have to be the subject of a future post.
E.D. Hirsch’s term ↩
In the area of neuroscience and cognitive science, I am probably only marginally better off than most teachers. My Sc.B. is in chemistry. So a background in empirical physical sciences and my knowledge of social science may help me to access some of the research on how people learn, but I would probably be just as susceptible to overconfidence in my ability to process this research and repeat untruths as many very intelligent educators. ↩