Last summer, there was an eruption of concern among schools that the GCSE English exam had suddenly been made harder by a change in grade boundaries. Ofqual, the exams regulator whose job it is to keep exams equally difficult in all years, certainly intervened: what is not clear is whether it got the boundaries right or made the exam too hard.
A judge is considering whether the boundary-setting was conducted via a fair process. But we now have some data from the National Pupil Database with which to look at the issue. I have GCSE English (or English language) results and each candidate’s scores at the age of 11 (although not which exam they took, nor their exam board*).
Since the aim of boundary-setting is to keep exams equally difficult, and since Ofqual believes the school system has not improved, we can use these two results together to tell us something: similarly able children at the age of 11 should get roughly the same grade in 2011 and 2012. There are horribly complex ways to do this formally, but I am going for an intuitive method. Read more >>
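The intuitive check described above can be sketched in a few lines of Python. This is a toy illustration only, not the NPD analysis itself: the pupil records, the KS2 score bands and the pass/fail outcomes below are all invented for demonstration.

```python
# Toy records: (ks2_score_at_11, gcse_year, got_C_or_better).
# Invented for illustration - real NPD data runs to hundreds of
# thousands of rows.
pupils = [
    (28, 2011, True), (28, 2012, True),
    (26, 2011, True), (26, 2012, False),
    (24, 2011, False), (24, 2012, False),
]

def pass_rate_by_band(records, year, band):
    """Share of pupils in a given KS2-score band who got a C or better."""
    in_band = [got_c for ks2, y, got_c in records if y == year and ks2 == band]
    return sum(in_band) / len(in_band) if in_band else None

# If the exam is equally difficult in both years, pupils with the same
# age-11 score should pass at roughly the same rate in each year.
for band in (24, 26, 28):
    print(band, pass_rate_by_band(pupils, 2011, band),
          pass_rate_by_band(pupils, 2012, band))
```

A large, systematic drop in the 2012 pass rate within a band, holding age-11 ability constant, would be the signature of a harder exam.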
At the moment, the Department for Education is considering changes to the league tables and the exam system. This seems an opportune moment to make a simple point about qualification-awarding and accountability: English school examinations are subject to measurement error in a really big way.
Here is a simple thought experiment to flesh it out. Imagine a class of 100 students. Let us specify that each one has a “true” ability that means that one pupil should get one mark, one pupil should get two, one should get three and so on – up to 100 marks. Now, let’s award this class one of 10 grades: 91 or more gets you an A, 81 or more a B and so on.
Let us assume that the tests are perfect. If that were the case, you would get ten individuals in each grade. Easy enough. But what happens if we start introducing errors into the test? We can do that with a set of exotically named (but very simple) “Monte Carlo” estimates, which I calculated using this simple spreadsheet. Read more >>
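The same Monte Carlo estimate can be sketched in Python rather than a spreadsheet. A minimal version, assuming normally distributed marking error (the error spread of three marks is an arbitrary choice for illustration, not a figure from the post):

```python
import random

def grade(mark):
    # Ten grades: 1-10 is the bottom grade, 11-20 the next, ..., 91-100 the top
    return min((mark - 1) // 10, 9)

def share_misgraded(noise_sd, trials=1000, seed=0):
    """Average share of pupils whose grade changes once random
    marking error is added to their 'true' mark."""
    rng = random.Random(seed)
    true_marks = range(1, 101)  # one pupil at each mark, 1..100
    changed = 0
    for _ in range(trials):
        for true in true_marks:
            noisy = round(true + rng.gauss(0, noise_sd))
            noisy = max(1, min(100, noisy))  # keep marks on the 1-100 scale
            if grade(noisy) != grade(true):
                changed += 1
    return changed / (trials * 100)

# With a perfect test nobody's grade moves; with even a modest error
# (a standard deviation of 3 marks) a sizeable minority of pupils
# land in the wrong grade.
print(share_misgraded(0))
print(share_misgraded(3))
```

The point survives any reasonable choice of error size: because the grade boundaries are hard cut-offs, small random errors in marks translate into wrong grades for everyone sitting near a boundary.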
Last week, I went to Wolverhampton where I spoke at a local debate, organised by the university and Pat McFadden, the local MP, about the local authority’s schools. I was the warm-up act for Lord Adonis, former schools minister, setting the scene about the city’s education system before his talk on lessons on school improvement.
It was an interesting event – and the city is clearly considering its future and the role of education within it. There is – judging by my inbox – serious and deep interest in improving schools in the city. One of the things I sought to do was set out Wolvo’s position in relation to the rest of the country – and what statistics about the city tell us.
Here is my presentation: Read more >>
On Thursday afternoon, journalists were taken into the basement of a Westminster building, fed chicken satay and walked through Ofqual’s report on the recent English GCSE. During the summer, a late shift in grade boundaries shocked schools, leaving many high-flying schools with significantly worse results than they had been expecting.
The most striking outcome of the Ofqual research is that it seems to find evidence of cheating. It is incidental to the main purpose of the review, which was to ask whether the shift in the grade boundaries was correct. But it’s a stunning – and quite clear – finding.
Here is the issue: English GCSE can be taken in such a way that the pupil has done everything except for teacher-marked “controlled assessments” in the final months. If they do that, the teachers know what marks each pupil needs. And teachers give those marks.
In the graph below, Ofqual have worked out how many marks candidates needed from their teachers to get a C. If a candidate got a mark to the right of the red vertical line, the teacher gave them a high enough mark to get the C. The shape of that distribution is, frankly, a sign of something horribly wrong. Teachers are massaging marks.
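One way to see why the shape of that distribution is damning is to sketch the comparison in code. The mark counts here are invented for illustration, not Ofqual’s data: the tell-tale pattern is a spike of candidates sitting exactly at the mark they needed, with a hole just below it.

```python
# Candidates counted by (marks awarded minus marks needed for a C) on the
# teacher-marked component. Negative = just missed; zero or positive =
# just enough. These numbers are made up to mimic the suspicious shape.
observed = {-3: 40, -2: 35, -1: 10, 0: 120, 1: 45, 2: 42, 3: 38}

def boundary_jump(counts):
    """Ratio of candidates landing exactly on the required mark to those
    one mark short of it. With honest marking the distribution should be
    smooth, so this ratio should be close to 1; a large value suggests
    marks are being nudged over the boundary."""
    return counts[0] / counts[-1]

print(boundary_jump(observed))  # 12.0 here - far from the ~1 expected
```

The absolute numbers do not matter; what matters is the discontinuity at zero, which random marking variation cannot produce on its own.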
Read more >>