Garbage in, garbage out – is this the reality of school data?
If Charles Babbage were alive today, he’d probably be living in Silicon Valley, wearing a grey hoodie to work, and talking about moving fast and breaking things.
However, as he was born in 1791, not 1991, he had to content himself with planning the world’s first computer: an “Analytical Engine”, which, if it had been built to his specification, would have had the equivalent of 675 bytes of memory, and a clock speed of about 7 Hz.
Think of that next time you are complaining about the time it takes for YouTube to load.
Inputs and outputs
Babbage did manage to build a “Difference Engine”, which was a kind of highly sophisticated calculator. When he showed off this machine to his friends in the Houses of Parliament, he got some interesting comments.
On two occasions I have been asked, — “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Computing has advanced enormously since Babbage’s day, but one thing hasn’t changed: we still haven’t invented the machine that gives you the right answer when you put in the wrong numbers. (Some cynics might argue that the composition of the Houses of Parliament hasn’t changed much either).
Modern computer engineers have a less polite way of expressing the same idea: GIGO, or garbage in, garbage out.
Educational assessment suffers from the same problem. Many schools have quite sophisticated tools for analysing pupil data. But unless those data are accurate, all our insights count for little.
Assessment tasks that involve extended writing are a particular issue because they are so difficult to mark reliably.
What if there isn’t really a gap between your white boys’ writing and everyone else’s – it’s just that most of your white boys are in the class of a teacher who marks harshly?
What if pupils in history are actually doing better than in their other subjects – but look as though they are underperforming because their teachers set them tougher assessments?
What if the issue with the trainee whose class is making no progress is not about how she teaches – but about how she assesses?
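To see how a harsh marker alone can manufacture an apparent gap, here is a minimal sketch in Python. All the numbers are invented for illustration: two classes share exactly the same underlying writing ability, but one teacher marks about five points harsher on a 100-point scale.

```python
import random
from statistics import mean

random.seed(0)  # make the illustration repeatable

# Hypothetical scenario: both classes have the same true ability (60/100),
# with some natural pupil-to-pupil variation...
true_ability = 60
class_a = [true_ability + random.gauss(0, 8) for _ in range(100)]

# ...but class B's teacher marks roughly 5 points harsher.
class_b = [true_ability + random.gauss(0, 8) - 5 for _ in range(100)]

gap = mean(class_a) - mean(class_b)
print(f"Apparent attainment gap: {gap:.1f} points")  # roughly 5 points
```

Any analysis that then groups these pupils – by class, by gender, by ethnicity – will dutifully “discover” a gap of around five points, even though it tells us nothing about the pupils and everything about the marking.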
Less is more
When the results from these types of assessments are put through a system that churns out some impressive graphs, they take on an aura of immutable truth. It can be difficult to remember that underneath the colourful and interactive 3D pie-chart are the exhausted judgements of an NQT hurrying to finish marking a set of books before the next class come barging into the room.
This isn’t helped by the fact that, in many schools, data collections are very frequent, and the pressure to get data into the system on time can further compromise quality.
If we did less frequent summative assessment, it would give us more time to think about what we are trying to assess, to design the right test, and to check its reliability.
And then, if our assessments were more reliable, we’d be able to realise the full value of a powerful data system. The best data systems free teachers from mundane administrative tasks and allow us to search for insights and patterns that would otherwise be too time-consuming to find.
As Babbage realised two centuries ago, machines that crunch numbers can be transformative, but only if we start with the right numbers.
Daisy Christodoulou is director of education at No More Marking and the author of Making Good Progress? and Seven Myths about Education. She tweets @daisychristo