September 24, 2022

The Editor Speaks: Invisible learning may become visible


Learning, if one reads our media (and I include this publication), has seemed invisible at some of our schools, especially government-run ones, and it has been interesting to watch how soon our new Minister for Education, Juliana O’Connor-Connolly, will try to deal with it.

We at least have a Minister with a teaching background, one who knows what it is like to try to educate pupils.

It is with some silent applause that I commend the Minister for trying something new – Visible Learning.

The Ministry of Education organised a seminar introducing Visible Learning.

O’Connor-Connolly said, “Children are at school for over 7 hours a day and we need to be able to make use of the most effective methods that keep our students on a positive track. Visible Learning is going to allow educators to be more effective and efficient by using tools that achieve the greatest impact. This seminar is only the first step and there is more to come.”

What is Visible Learning?

In a 2008 meta-study, John Hattie popularized the concept of visible learning.[1]

Hattie compared the effect sizes of many aspects that influence learning outcomes in schools and pointed out that in education most things work. The question is which strategies and innovations work best and where to concentrate efforts in order to improve student achievement. The Times Educational Supplement described Hattie’s meta-study as “teaching’s holy grail”.[2]

According to Hattie’s findings, visible learning occurs when teachers see learning through the eyes of students and help them become their own teachers. Hattie found that the ten most effective influences relating to student achievement are:[1]

student self-reported grades (d = 1.44)
formative evaluation (d = 0.90)
teacher clarity (d = 0.75)
reciprocal teaching (d = 0.74)
feedback (d = 0.73)
teacher-student relationships (d = 0.72)
meta-cognitive strategies (d = 0.69)
self-verbalisation/questioning (d = 0.64)
teacher professional development (d = 0.62)
problem-solving teaching (d = 0.61)

SOURCE: Wikipedia
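The d values in the list above are Cohen’s d, the standardized difference between a treatment group’s average and a control group’s average. As a rough illustration (the scores below are invented for the example, not drawn from Hattie’s data), it can be computed like this:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: the difference in group means divided by the
    pooled standard deviation of the two groups."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)  # sample SDs (n - 1 denominator)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical test scores for two small groups of pupils:
print(round(cohens_d([75, 80, 85, 90], [70, 72, 78, 80]), 2))  # ≈ 1.32
```

For scale, Hattie’s much-cited “hinge point” of d = 0.40 is the threshold he treats as worth a teacher’s attention, which is why an effect like 1.44 sits at the very top of his list.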

However, like anyone well intentioned, Hattie has his critics. The most notable critique appeared in the McGill Journal of Education, Vol. 52, No. 1 (2017), where a statistician takes apart almost everything that Hattie presented in his book as scientific data.

The statistician states:

“The basic idea behind Hattie’s research, that is, to identify “what works best in education” using scientific data, is not bad in and of itself. The desire for rigor and concrete data is essential in order to describe the impact of measures on teaching and learning. Hattie draws from meta-analyses, which are relatively complex statistical methods frequently used in, among many other fields, medical and health research. The size of his synthesis appears impressive: over 800 meta-analyses, comprising over 50,000 studies and millions of individuals. Starting with over 135 effect sizes, it seems capable of measuring a wide array of interventions with the potential to improve learning. Hattie is not afraid of numbers, which is apparently not that common among researchers in the field of education; this therefore gives the appearance of scientific rigor to his work. Consequently, for a statistician, this seems like a very good start.

“Unfortunately, in reading Visible Learning and subsequent work by Hattie and his team, anybody who is knowledgeable in statistical analysis is quickly disillusioned. Why? Because data cannot be collected in any which way nor analyzed or interpreted in any which way either. Yet, this summarizes the New Zealander’s actual methodology. To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigor. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes willful blindness.

Methodological errors

Fundamentally, Hattie’s method is not statistically sophisticated and can be summarized as calculating averages and standard deviations, the latter of which he does not really use. He uses bar graphs (no histograms) and is capable of using a formula that converts a correlation into Cohen’s d (which can be found in Borenstein, Hedges, Higgins, & Rothstein, 2009), without understanding the prerequisites for this type of conversion to become valid. He is guilty of many errors, but his main errors correspond to two of the three major errors in science cited by Allison, Brown, George, and Kaiser (2016) in Nature:

1. Miscalculation in meta-analyses

2. Inappropriate baseline comparisons”

SOURCE: https://www.mycota.ca/pro-d-blog/2017/09/05/a-criticism-of-john-hattie%E2%80%99s-arguments-in-visible-learning-from-the-perspective-of-a-statistician/
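The conversion the statistician refers to, from a Pearson correlation r into Cohen’s d, is the standard one given in Borenstein et al. (2009). A minimal sketch of it (the critique’s point is not that the formula is wrong, but that it is only valid under assumptions about how the sample divides into groups, which Hattie is said to ignore):

```python
from math import sqrt

def d_from_r(r):
    """Convert a Pearson correlation r to Cohen's d: d = 2r / sqrt(1 - r^2).
    The conversion is only meaningful when r comes from a comparison of two
    roughly equal-sized groups -- the prerequisite the critique says
    Hattie overlooks."""
    return 2 * r / sqrt(1 - r * r)

print(round(d_from_r(0.5), 2))  # a correlation of 0.5 maps to d ≈ 1.15
```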

The article ends with, “In summary, it is clear that John Hattie and his team have neither the knowledge nor the competencies required to conduct valid statistical analyses. No one should replicate this methodology because we must never accept pseudoscience. This is most unfortunate, since it is possible to do real science with data from hundreds of meta-analyses.”

Even if one sets aside the statistical errors the above article lists, another article, on the Literacy in Leafstrewn website, titled “Can we trust educational research? (“Visible Learning”: Problems with the evidence)”, carries this warning:

“I’ve been reading several books about education, trying to figure out what education research can tell me about how to teach high school English. I was initially impressed by the thoroughness and thoughtfulness of John Hattie’s book, Visible Learning, and I can understand why the view of Hattie and others has been so influential in recent years. That said, I’m not ready to say, as Hattie does, that we must make all learning visible, and in particular that “practice at reading” is “minimally” associated with reading gains. I discussed a couple of conceptual issues I have with Hattie’s take in an earlier post–I worry that Visible Learning might be too short-term, too simplistic, and less well-suited to English than to other disciplines. Those arguments, however, are not aimed at Hattie’s apparent strength, which is the sweep and heft of his empirical data. Today, then, I want to address a couple of the statistical weaknesses in Hattie’s work. These weaknesses, and the fact that they seem to have been largely unnoticed by the many educational researchers around the world who have read Hattie’s book, only strengthen my doubts about the trustworthiness of educational research. I agree with Hattie that education is an unscientific field, perhaps analogous to what medicine was like a hundred and fifty years ago, but while Hattie blames this on teachers, whom he characterizes as “the devil in this story” because we ignore the great scientific work of people like him, I would ask him to look in the mirror first. Visible Learning is just not good science.

“The most blatant errors in Hattie’s book have to do with something called “CLE” (Common Language Effect size), which is the probability that a random kid in a “treatment group” will outperform a random kid in a control group. The CLEs in Hattie’s book are wrong pretty much throughout. He seems to have written a computer program to calculate them, and the computer program was poorly written. This might be understandable (all programming has bugs), and it might not have meant that Hattie was statistically incompetent, except that the CLEs Hattie cites are dramatically wrong. For instance, the CLE for homework, which Hattie uses prominently (page 9) as an example to explain what CLE means, is given as .21. This would imply that it was much more likely that a student who did not have homework would do well than a student who did have homework. This is ridiculous, and Hattie should have noticed it. But even more egregious is when Hattie proposes CLEs that are less than 0. Hattie has defined the CLE as a probability. A probability cannot be less than 0. There cannot be a less than zero chance of something happening (except perhaps in the language of hyperbolic seventh graders.)

As my statistician friend wrote me in an email, “People who think probabilities can be negative shouldn’t write books about statistics.”

SOURCE: http://literacyinleafstrewn.blogspot.com/2012/12/can-we-trust-educational-research_20.html
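Under the usual assumption that scores in both groups are normally distributed, the CLE the blogger describes is a simple function of Cohen’s d, and because it is a probability it can never leave the interval [0, 1]; any positive d must give a CLE above 0.5. A short sketch of the check (the 0.29 figure is the homework effect size Hattie reports):

```python
from math import erf

def cle_from_d(d):
    """Common Language Effect size: the probability that a random score
    from the treatment group beats a random score from the control group,
    assuming both groups are normal with equal variance.
    CLE = Phi(d / sqrt(2)), which simplifies to 0.5 * (1 + erf(d / 2))."""
    return 0.5 * (1 + erf(d / 2))

print(round(cle_from_d(0.29), 2))  # ≈ 0.58 for homework, not the 0.21 printed
print(cle_from_d(0.0))             # no effect -> exactly 0.5
print(round(cle_from_d(-5.0), 4))  # even a huge negative d stays above 0
```

This is the blogger’s point in miniature: a CLE of 0.21 for a positive effect, let alone a negative CLE, cannot come out of the definition, so the figures in the book must be miscalculated.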

None of the above makes me want to change my initial observation: “It is with some silent applause that I commend the Minister for trying something new – Visible Learning.”

However, we also need to know where it is weak so we can make it strong and VISIBLE.

Please see iNews Cayman’s Front Page story today “Cayman Islands Educators Seminar 2017-2018 on Visible Learning”.
