‘What do Cadbury’s Creme Eggs and dropped bananas have to do with empirical research in schools?’ I hear you ask.
The first problem that education researchers face is controlling all the different things that can affect their data. In a complex social system such as a school, there are hundreds of events happening simultaneously, which makes it difficult to untangle the effects of a single intervention on pupils' learning. What's more, even the smallest incident may affect the overall results. I call this the Dropped Banana Effect, after an experience I had during my own education research project.
For my master's thesis, I worked with schools in London and Manchester to examine how technology affects Key Stage 3 pupils' attainment in, and interest in, French. On each visit, I took large wholesale boxes of Cadbury's Creme Eggs as incentives for the pupils, so that I'd have enough participants for the necessary questionnaires and writing and speaking tasks. In reality, these turned out to be quite distracting for some pupils, who gazed hungrily over my shoulder at the box of chocolate, ignoring my hopeful linguistic prompts.
On one visit to a large community school, I was waiting outside a classroom for my next group of participants. The bell rang, and a single Year 7 pupil rushed out of his lesson. A banana tumbled from his rucksack and bounced onto the floor. He didn't notice.
I moved to pick it up, but suddenly three classes of teenagers burst out of their classrooms towards me. The banana was kicked, then trampled. Pupils started screaming and a few slipped and fell. Mush flew in the air and spread on shoes and clothes. Someone grabbed the banana skin and tried to shove it down a friend’s jumper. Anxiously, I looked around for a responsible adult, and wondered if that was supposed to be me.
Within seconds, three teachers with walkie-talkies appeared. My heart sank as the loudest screamers (about half of my research sample) were lined up against the wall and thoroughly told off for poor behaviour in corridors and for disrespecting the cleaning staff.
Five minutes later, les misérables slumped into my classroom, mashed banana in their hair and on their uniforms. Cautiously, I announced the French writing task. The timing could not have been worse. Understandably, many of the sample were not in a co-operative mood and left their papers blank, protesting against the perceived injustice of adults. Even chocolate rewards couldn't overcome such strong resistance.
Although I didn't particularly blame them, I now faced the ethical dilemma of how to report my results. Could I claim that my data was an accurate representation of these pupils when half of them had been furiously angry at the time of data collection? Would I need to repeat the task? There was no time, as the pupils had to get back to their normal curriculum. Perhaps I should put in a disclaimer? I thought of my supervisor, a highly respected academic, and didn't think he would appreciate an account of the farcical banana incident.
In social science research, it's notoriously difficult to get a standardised sample. The home learning environment, parents' qualifications and even birth weight have been shown to have a significant effect on educational outcomes. This means that even the most carefully chosen sample will be imperfect. What's more, children are human and don't always feel or behave the same way. Some will be hungry, angry or tired, and therefore be more or less engaged in the task during data collection. It's incredibly difficult to control for factors like that. By chance, I happened to witness the cause of my participants' frustration. Had it happened in another corridor instead, I might have reached a different conclusion about how they'd done on the tasks I'd set.
So what does this mean for the evidence that we get from research? Bigger, better-funded studies can reduce the impact of this kind of monkeying around on their overall conclusions. Larger sample sizes, or even randomised controlled trials (RCTs), allow researchers to separate out the anomalies from general trends in the data. Yet teachers may still find that the outcome of an intervention changes completely with a different bunch of pupils.
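If you like, the sample-size point can be sketched in a few lines of Python. This is a toy simulation, not real data: the scores, the class sizes and the `study_mean` helper are all invented for illustration. It imagines ten pupils leaving their papers blank (scoring zero) after a corridor incident, and compares a one-class study with a much larger one.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def study_mean(n_pupils, disrupted=10):
    """Mean test score (out of 20) when `disrupted` pupils leave
    their papers blank after a banana-related incident."""
    # Undisrupted pupils score around 12 out of 20 (invented figures)
    scores = [random.gauss(12, 3) for _ in range(n_pupils - disrupted)]
    scores += [0] * disrupted  # blank papers score zero
    return sum(scores) / n_pupils

small = study_mean(30)    # one class: a third of the sample is disrupted
large = study_mean(3000)  # many schools: the same ten pupils barely register
print(f"small study: {small:.1f}, large study: {large:.1f}")
```

In the small study, the ten blank papers drag the mean well below the pupils' true ability; in the large one, the same incident is diluted almost to nothing, which is exactly why bigger samples are better at separating anomalies from trends.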
It’s great that schools are looking to research for inspiration from what has worked well elsewhere. However, although evidence is appealing, it is not the same as completely reliable fact. Research in schools can only suggest which outcomes are likely, rather than certain. To interpret it otherwise would simply be … well, bananas!
To help you out of tricky spots in your school, we’ve created a guide to evidence-based practice. Members of our school leader service can read it here. It doesn’t mention Creme Eggs – but don’t let that put you off.