GCSE

Chris Cook

I wrote a piece yesterday on the continued astonishing rise of London’s state schools. One of my brilliant colleagues posed an interesting question: what happens if a child moves into London?

Below, I have published figures showing how children who lived outside London at the age of 11 went on to do in their GCSEs (using our usual point score) at the age of 16.

I have divided this set of pupils twice: first, by whether or not they had moved into London by the age of 16; and second, by how well they did in standardised tests at the age of 11.
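The two-way split described above can be sketched in a few lines of Python. The records here are invented for illustration, not drawn from the real pupil data: each tuple holds a hypothetical prior-attainment band, whether the pupil had moved into London by 16, and a GCSE point score.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records, not real pupil data: each tuple is
# (ks2_band, moved_to_london_by_16, gcse_point_score).
pupils = [
    (1, False, 220), (1, True, 255), (1, False, 230),
    (2, False, 280), (2, True, 305), (2, True, 315),
    (3, False, 340), (3, True, 365),
]

# Split the cohort both ways: prior-attainment band, then mover status.
groups = defaultdict(list)
for band, moved, points in pupils:
    groups[(band, moved)].append(points)

for band, moved in sorted(groups):
    label = "moved into London" if moved else "stayed outside"
    print(f"KS2 band {band}, {label}: "
          f"mean GCSE points {mean(groups[(band, moved)]):.1f}")
```

Grouping on the pair of keys, rather than on either one alone, is what lets movers be compared only with non-movers of similar prior attainment.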


Chris Cook

Last summer, there was an eruption of concern among schools that the GCSE English exam had suddenly been made harder by a change in grade boundaries. Ofqual, the exams regulator whose job it is to keep exams equally easy in all years, certainly intervened: what is not clear is whether it got it right, or whether it made the exam too difficult.

A judge is considering whether the boundary-setting was conducted via a fair process. But we now have some data from the National Pupil Database with which to look at the issue. I have GCSE English (or English language) results and each candidate’s scores at the age of 11 (although not which exam they took, nor their exam board*).

Since the aim of boundary-setting is to keep exams equally difficult, and since Ofqual believes the school system has not improved, we can use these two results together to tell us something: similarly able children at the age of 11 should get roughly the same grade in 2011 and 2012. There are horribly complex ways to do this formally, but I am going for an intuitive method. 
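The intuitive method can be sketched as follows: bucket candidates by their age-11 score, then compare the average English result of each bucket across the two years. All the numbers below are invented for illustration; the real analysis uses the pupil data described above.

```python
from collections import defaultdict
from statistics import mean

# Invented illustration data: (exam_year, ks2_fine_grade, english_points),
# where english_points stands in for the GCSE English point score.
candidates = [
    (2011, 4.5, 40), (2011, 4.5, 46), (2011, 5.0, 46), (2011, 5.0, 52),
    (2012, 4.5, 40), (2012, 4.5, 40), (2012, 5.0, 46), (2012, 5.0, 46),
]

by_band_year = defaultdict(list)
for year, band, points in candidates:
    by_band_year[(band, year)].append(points)

# For each prior-attainment band, a negative gap means the 2012 cohort
# did worse than similarly able 2011 candidates -- the pattern that
# would suggest the exam had been made harder.
for band in sorted({b for b, _ in by_band_year}):
    gap = mean(by_band_year[(band, 2012)]) - mean(by_band_year[(band, 2011)])
    print(f"KS2 {band}: 2012 minus 2011 = {gap:+.1f} points")
```

If boundary-setting kept the exams equally difficult, and the school system did not change, the gaps should hover around zero in every band.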

Chris Cook

At the moment, the Department for Education is considering changes to the league tables and the exam system. This seems an opportune moment to make a simple point about qualification-awarding and accountability: English school examinations are subject to measurement error in a really big way.

Here is a simple thought experiment to flesh it out. Imagine a class of 100 students. Let us specify that each one has a “true” ability that means one pupil should get one mark, one should get two, one should get three and so on – up to 100 marks. Now, let us award this class one of ten grades: 90+ marks gets you an A, 80+ a B and so on.

Let us assume that the tests are perfect. If that were the case, you would get ten pupils in each grade. Easy enough. But what happens if we start introducing errors into the test? We can do that with a set of exotically named (but very simple) “Monte Carlo” estimates, which I calculated using this simple spreadsheet.
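The same Monte Carlo exercise can be run in a few lines of Python instead of a spreadsheet. This is a minimal sketch, assuming normally distributed marking noise: 100 pupils with true marks of 1 to 100, ten grade bands, and repeated simulated sittings.

```python
import random

random.seed(42)  # reproducible illustration

TRUE_MARKS = range(1, 101)  # one pupil per mark, 1..100

def grade(mark):
    """Ten grade bands: marks 1-10 are the bottom grade, 91-100 the top."""
    return min((mark - 1) // 10, 9)

def misgraded_share(noise_sd, trials=500):
    """Share of pupils whose observed (noisy) mark falls in a different
    grade band from their true mark, averaged over many sittings."""
    wrong = 0
    for _ in range(trials):
        for true in TRUE_MARKS:
            observed = min(max(round(true + random.gauss(0, noise_sd)), 1), 100)
            if grade(observed) != grade(true):
                wrong += 1
    return wrong / (100 * trials)

print(misgraded_share(0))  # a perfect test misgrades nobody
print(misgraded_share(5))  # modest marking noise misgrades a large minority
```

With zero noise every pupil lands in the right band; even a modest standard deviation of a few marks pushes a substantial share of pupils across a boundary, because every pupil sits close to one.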

Chris Cook

Last week, I went to Wolverhampton, where I spoke at a local debate, organised by the university and Pat McFadden, the local MP, about the local authority’s schools. I was the warm-up act for Lord Adonis, former schools minister, setting the scene about the city’s education system before his talk on lessons on school improvement.

It was an interesting event – and the city is clearly considering its future and the role of education within it. There is – judging by my inbox – serious and deep interest in improving schools in the city. One of the things I sought to do was set out Wolvo’s position in relation to the rest of the country – and what statistics about the city tell us.

Here is my presentation: 

Chris Cook

Last week, the excellent Paul Francis, political editor of the Kent Messenger, reported that Kent, the most significant selective county left in England, had come up with a clever plan: to make the entry test for grammar schools “tutor-proof”.

This idea comes up a lot, largely from people promoting selection. You can see why: it is often presented as a means of squaring a circle. They can argue that grammar schools help bright poor children, while dealing with the fact that very few poor children get into them.

But, in truth, a properly administered test that accurately captures the education enjoyed by children at the age of 11 should exclude large numbers of poor children – not because they are intrinsically less able, but because, at 11, the poor-rich divide is already a chasm.

Chris Cook

On Thursday afternoon, journalists were taken into the basement of a Westminster building, fed chicken satay and walked through Ofqual’s report on the recent English GCSE. During the summer, a late shift in grade boundaries shocked schools, leaving many high-flying schools with significantly worse results than they had been expecting.

The most striking outcome of the Ofqual research is that it seems to find evidence of cheating. This finding is incidental to the main purpose of the review, which was to ask whether the shift in the grade boundaries was correct. But it’s a stunning – and quite clear – finding.

Here is the issue: English GCSE can be taken in such a way that the pupil has done everything except for teacher-marked “controlled assessments” in the final months. If they do that, the teachers know what marks each pupil needs. And teachers give those marks.

In the graph below, Ofqual has worked out how many marks candidates needed from their teachers to get a C. If they got a mark to the right of the red vertical line, the teacher gave them a high enough mark to get the C. The shape of that distribution is, frankly, a sign of something horribly wrong. Teachers are massaging marks.
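The tell-tale pattern in a chart like Ofqual’s can be summarised as bunching: awarded marks piling up at or just above the mark each candidate needed, with a trough just below. A minimal sketch, using invented toy data rather than Ofqual’s figures:

```python
from collections import Counter

# Invented toy data: for each candidate entered this way, the mark they
# needed from the teacher-assessed component to reach a C, and the mark
# the teacher actually awarded.
needed_and_awarded = [
    (38, 38), (40, 41), (35, 35), (42, 40),
    (30, 31), (37, 37), (44, 44), (33, 36),
]

# Gap between awarded and needed mark. In a clean distribution the gaps
# should be spread smoothly; a spike at 0-1 with a trough just below is
# the bunching that suggests marks are being pushed up to the boundary.
gaps = Counter(awarded - needed for needed, awarded in needed_and_awarded)
just_over = sum(n for gap, n in gaps.items() if 0 <= gap <= 1)
print(f"{just_over}/{len(needed_and_awarded)} candidates at or 1 mark over")
```

On real data, the diagnostic is not the share itself but its discontinuity: honest marking has no reason to know, or respect, where the C boundary sits.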