The Learner Focused Learning Blog

May 24, 2024

“Evidence-based” practice isn’t having the effect we were hoping for - what next?



Fergus McShane

This is part one of a two-part series on understanding school contexts from a socio-cultural point of view. Follow us on LinkedIn, Instagram, or Facebook to be notified when the second part is released!

In a recent article, Sally Riordan examines the effects of so-called “evidence-based” education. Why the quotation marks? Because in education circles, “evidence-based” has become a byword for “supported by a randomised controlled trial”. And, as the evidence on this approach itself accumulates, we’re beginning to see that being “evidence-based” is, at the very least, not as easy as it seems. In this article, we look at why our current approach to evidence, including RCTs and other “positivist” frameworks borrowed from the natural sciences, isn’t working, and at what’s missing from the foundation of our “evidence base”. In the second part of this series, we’ll look at how other theoretical frameworks might fill in the gaps.

What is a randomised controlled trial and why have they become so prevalent?

In the words of Dr Ben Goldacre, co-author of a 2013 report released by the DfE that advocates for their expanded use in education, a randomised controlled trial is where:

“We simply take a group of children, or schools (or patients, or people); we split them into two groups at random; we give one intervention to one group, and the other intervention to the other group; then we measure how each group is doing, to see if one intervention achieved its supposed outcome any better.”

The ultimate goal is to assess the impact of a given intervention by contrasting outcome data across the two groups. In theory, this sounds entirely reasonable: look at the two groups and see which one did better, whether the system we’re making policy for is education, medicine, or anything else. Indeed, in fields such as medicine, RCTs have become ubiquitous and are often regarded as the “gold standard” of evidence. Why then, as Riordan has found, does the same approach in education yield a much murkier picture?
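As an illustration, the recipe Goldacre describes can be sketched as a tiny simulation. Everything here is hypothetical - the function names, the outcome model (a baseline score plus Gaussian noise), and the effect size are chosen only to show the mechanics of random assignment and group comparison, not to model any real trial:

```python
import random
import statistics

def run_trial(baselines, effect, seed=0):
    """Randomly split participants into two arms, simulate an outcome
    score for each, and return the difference in group means.

    `baselines` is a list of pre-trial scores (one per participant);
    `effect` is the (hypothetical) true boost the intervention gives.
    """
    rng = random.Random(seed)
    shuffled = baselines[:]
    rng.shuffle(shuffled)                      # random assignment
    half = len(shuffled) // 2
    control, intervention = shuffled[:half], shuffled[half:]

    # Toy outcome model: baseline plus noise; the intervention arm
    # additionally receives `effect`.
    def outcome(base, treated):
        return base + rng.gauss(0, 5) + (effect if treated else 0)

    control_scores = [outcome(b, False) for b in control]
    intervention_scores = [outcome(b, True) for b in intervention]
    return statistics.mean(intervention_scores) - statistics.mean(control_scores)
```

With enough participants, the noise averages out and the mean difference approximates the true effect - which is exactly the logic that makes RCTs attractive, and exactly what becomes fragile when the “noise” is an entire social context rather than random measurement error.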

The answer is not a straightforward one. In fact, perhaps there is no “one” answer; perhaps looking for the “one answer” is the problem in the first place. To get to the bottom of this, we need to examine what it means to “create evidence” and be “evidence-based” in the first place, and be self-critical about what’s missing from that approach. Once we’ve done that, we need to be willing to reopen what it means to be “evidence-based” and move towards a more comprehensive model of research-driven education.

How does “social” science differ from “natural” science? In other words, what’s wrong with randomised controlled trials?

In education, the preference for and prevalence of randomised controlled trials (RCTs) and other positivist methodologies reflects a fundamental misunderstanding of what it is we’re trying to understand when we look for “what works” in education systems.

Another way to think of “positivism” is as the “scientific method”. We’ll call it “positivism” here because we need to differentiate between two different ways of doing “science”. Positivism seeks to discover universal truths through objective measurement and controlled experimentation. You used positivism when you did experiments in science at school: come up with a hypothesis, identify the variables, control for all but one of them, and observe the effect of modifying that variable. That approach, however, rests on a fundamental assumption: that it is possible to create a “laboratory environment” in which every variable can be identified and controlled for.

So what’s different about social systems? In social systems, that assumption does not hold. Social systems are simply too large and too complex for us to create the “laboratory” conditions required to identify and control for every variable. To account for that size and complexity, we need a new epistemological framework - in other words, a new way of understanding systems, creating knowledge, and making interventions. We need an approach to creating evidence that is built specifically to understand social systems, rather than a borrowed framework designed for understanding physical systems.

What do we need to account for in this new framework? In short, the extra dimension that social systems operate with and that positivist approaches struggle to capture. In the physical world we’re used to tracking the effects of clearly defined objects, with easily identifiable and measurable properties, in three dimensions of space and a fourth of time. We still need that in our new framework - when we look at the assemblage framework in part two of this series we’ll call it “materiality”, accounting for the arrangement of physical objects in our social system. We also, however, need to go a step further.

Social systems are typified by networks of exchanged agency, symbols, and affect. In other words, the way we understand something when we see it is not transmitted through the physical dimensions but through a social one. When we look at the operation of a classroom, for example, we don’t just witness it; we link what we see to our knowledge and experience - we interpret it. That interpretation is built from a series of experiences and interpretations cultivated over time. The physical transmission of sound and light waves does not encode the information we receive, the knowledge we link it with, or the conclusions we draw from interpreting it. We therefore need a framework that accounts for this “extra dimension” in which social meaning is created, exchanged, and interpreted - one that can capture the emergent, dynamic processes and properties that make up the social system we are observing.

The last thing we need to account for in our framework is the fact that we’re part of the very thing we’re trying to measure. We perceive the world from a vantage point within it, and that vantage point shapes how we understand it. We are measuring the system from the inside, not from the outside as one would in a laboratory experiment.

Working through all of this, we see that positivist approaches, which aim to isolate variables and establish cause-and-effect relationships under controlled conditions, are simply not rich enough to capture the emergent, dynamic depth of social systems. They struggle to capture comprehensive data on these complex social networks and, even when they can, they struggle to turn it into the “objective” (i.e. free of external influence) data needed to draw the universal conclusions the approach seeks.

Why, then, the persistence of the positivist approach in education? I think Biesta is onto something in his paper on ‘values’- versus ‘evidence’-driven education: that ever since Copernicus and Galileo displaced the earth from the centre of the solar system, we have sought to turn the world into understandable, “fact-producing” machines, regardless of what is lost by abstracting away the human element. This process of “metrology” has yielded huge technological advances, so we kept applying it where it works and, somewhere along the way, began applying it where it doesn’t. Beyond that, how we arrived at our current model of evidence-based research is probably less important than recognising where we are and that we need to find somewhere to go next. When working with social frameworks and social contexts, definitive answers are hard to find, and often “close enough to be useful” answers suffice - at least for those of us outside academia.

What then are the implications for “evidence-based” practice? We need to start by deciding what the goal should be. Riordan’s research shows that our current approach - looking for generalisable, uniform rules that explain “what works” and then applying the results by rote - isn’t working. If we are not to give up on learning from each other to support our students, we need to agree on a new goal: to shift our focus from “let’s find ‘what works’ so everyone can do ‘what works’” to helping educators answer the question “what might work to achieve X in this situation, based on my understanding of this context?”. If our research happens to find “what works” in every context, great - but the goal should be to inspire the thinking that helps teachers and educators map and understand their local assemblage and, with that understanding, make interventions that help our students reach their vision of success.

Where next for “evidence-based” practice?

The challenge in enabling educators to map their own context is to avoid retrenching into an “us vs them” battle between “the positivists” and “the social constructivists” - to be clear, the two are not at odds and can complement each other. Rather, we need to expand our understanding of what can count as “evidence”: to be open-minded enough to draw from a wide variety of sources to better understand the context of our school and the context in which that school operates.

Rather than limiting “evidence” to what can be derived from RCTs, which abstract away the very complexities and contextual nuances critical to understanding educational outcomes, we should broaden our idea of what makes good evidence. That might mean valuing qualitative data, case studies, ethnographic research, and other methodologies that provide deeper insight into how educational interventions interact with the complex social fabric of schools and communities - methodologies that reintroduce the role of the observer in interpreting, summarising, sharing, and contextualising through their experience; methodologies that don’t claim to be “objective” but are no less committed to seeking an accurate and usable representation of how school contexts operate.

By reopening the term “evidence” to include a wider array of research methods, while retaining data-driven and positivist approaches where appropriate, we can better understand and respond to the multifaceted, context-dependent nature of education and work toward building learning systems that work for every learner.

In part two of this series, we’ll look at what some of those methodologies look like in practice, starting with Deleuze’s theory of Assemblage.
