Chris Cook

Last summer, there was an eruption of concern among schools that the GCSE English exam had suddenly been made harder by a change in grade boundaries. Ofqual, the exams regulator whose job it is to keep exams equally difficult from year to year, certainly intervened: what is not clear is whether it got the boundaries right or made the exam too hard.

A judge is considering whether the boundary-setting was conducted via a fair process. But we now have some data with which to look at the issue from the National Pupil Database. I have GCSE English (or English language) results and each candidate’s scores at the age of 11 (although not which exam they took, nor their exam board*).

Since the aim of boundary-setting is to keep exams equally difficult, and since Ofqual believes the school system has not improved, we can use these two results together to tell us something: similarly able children at the age of 11 should get roughly the same grade in 2011 and 2012. There are horribly complex ways to do this formally, but I am going for an intuitive method.
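The intuitive comparison described above can be sketched in a few lines. This is purely illustrative: the candidate tuples, the prior-attainment scale and the function name are all invented, and the real analysis uses the full National Pupil Database rather than toy data.

```python
# Illustrative sketch (not Ofqual's or the FT's actual method): group
# candidates by their age-11 score, then compare the share reaching a C or
# better in each cohort. All data below are invented for illustration.
from collections import defaultdict

def c_rate_by_prior_score(candidates):
    """candidates: list of (age11_score, gcse_grade) tuples.
    Returns {age11_score: share of those candidates achieving C or better}."""
    grade_order = ["A*", "A", "B", "C", "D", "E", "F", "G", "U"]
    totals, passes = defaultdict(int), defaultdict(int)
    for score, grade in candidates:
        totals[score] += 1
        if grade_order.index(grade) <= grade_order.index("C"):
            passes[score] += 1
    return {score: passes[score] / totals[score] for score in totals}

cohort_2011 = [(28, "C"), (28, "C"), (28, "D"), (30, "B")]  # invented
cohort_2012 = [(28, "C"), (28, "D"), (28, "D"), (30, "B")]  # invented

print(c_rate_by_prior_score(cohort_2011)[28])  # 2/3 of this band got a C+
print(c_rate_by_prior_score(cohort_2012)[28])  # 1/3 of the same band did
```

If the exam really were equally hard in both years, the two rates for any given prior-attainment band should come out close to one another.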

Chris Cook

On Thursday afternoon, journalists were taken into the basement of a Westminster building, fed chicken satay and walked through Ofqual’s report on the recent English GCSE. During the summer, a late shift in grade boundaries shocked schools, leaving many high-flying schools with significantly worse results than they had been expecting.

The most striking outcome of the Ofqual research is that it seems to find evidence of cheating. It is incidental to the main purpose of the review, which was to ask whether the shift in the grade boundaries was correct. But it’s a stunning – and quite clear – finding.

Here is the issue: English GCSE can be taken in such a way that the pupil has done everything except for teacher-marked “controlled assessments” in the final months. If they do that, the teachers know what marks each pupil needs. And teachers give those marks.

In the graph below, Ofqual have worked out how many marks candidates needed from their teachers to get a C. If they got a mark to the right of the red vertical line, the teacher gave them a high enough grade to get the C. The shape of that distribution is, frankly, a sign of something horribly wrong. Teachers are massaging marks.


Chris Cook

The argument about GCSE English grades continues to boil away. Legal actions are commencing. The attention has uncovered clues that exam reforms over the past few years have been more substantial than ministers or officials intended. The marking system used for the old O-level may have been reintroduced by stealth, and by accident.

Here’s why: English exams used to deploy a process called “norm referencing” (or “marking on a curve”). That means that, in effect, you hand out grades according to candidates’ rank order. In 1963, it was decided that roughly the top 10 per cent of A-level entrants would get an A, the next 15 per cent a B and so on.

Since the 1980s, exams have used “criterion referencing”. That is to say, examiners specify: “if you know the date of the Battle of Hastings, that is worth a C. If you know about William the Conqueror’s claim on the throne, you get a B. If you know about Hardrada, you get an A…” Under this model, the number of pupils getting each grade can change from year to year.
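The difference between the two models can be made concrete with a toy sketch. The quotas follow the 1963 A-level convention described above, but the mark boundaries and the use of only three grades are invented for illustration:

```python
# Toy contrast of norm referencing vs criterion referencing.
# Quotas (10% A, next 15% B) follow the 1963 convention described above;
# the mark boundaries in criterion_reference are invented.

def norm_reference(marks):
    """Grade by rank: top 10% get A, next 15% B, everyone else C here."""
    n = len(marks)
    grades = []
    for rank, _ in enumerate(sorted(marks, reverse=True)):
        if rank < 0.10 * n:
            grades.append("A")
        elif rank < 0.25 * n:
            grades.append("B")
        else:
            grades.append("C")
    return grades  # always 10% As, however strong or weak the cohort

def criterion_reference(mark):
    """Grade against fixed boundaries: the share of As can drift freely."""
    return "A" if mark >= 80 else "B" if mark >= 70 else "C"
```

Under the first model the proportion of As is fixed by construction; under the second it floats with the cohort’s marks, which is why grade shares can rise year after year.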

This graph, from Alan Smithers at Buckingham, shows what happened when England switched from one to the other in the late 1980s.


Chris Cook

The Ofqual decision that all is well on the English GCSE has not been well received by schools. I thought, further to my last post, that it would help to understand school leaders’ feelings about this if we took a case study of an excellent school.

I have asked Sally Coates, head of Burlington Danes Church of England Academy – one of the Ark Schools – to explain what she went through last week.  Before you read her account, I thought I would explain why this particular school matters.

Like other Ark Schools, BDA uses “progression” to gauge its success. It benchmarks itself on improving and stretching each child, regardless of the level of their education when they enter. It does not simply attempt to hit the government’s targets.

As a result, BDA expends effort on people who already know enough to get Cs in English, maths and three other subjects – the basket of achievement used by the government to measure school success. This school does not – unlike others – fixate on the C/D line.

This is easy to spot: Ark’s performance rises dramatically when you use a measure that gives schools credit for getting children to higher grades than C. BDA stacked up 24 children last year who managed straight As in English, maths and three other subjects.

Let me be clear: the school does keep an eye on that grade boundary. Here, indeed, is a photo of the Venn diagram Ms Coates describes below, enhanced with some light photoshopping to make sure it is entirely anonymous.

Children are placed in circles showing where they are weak, and each child in each circle gets appropriate tutoring to help drive them up to the line.

But this intervention is only one link in a chain of monitoring lines. Children in the “safehouse” are being monitored against higher grades elsewhere. I will return to this, but BDA’s results show many more As and Bs than is normal.

That is why Ms Coates’s anger is so important: despite being focused on progression, not the narrow “C will do” measures used by the government, the school was caught out by the shift in the C/D boundary. Now, over to Ms Coates:

Chris Cook

Over the weekend, Ofqual announced it will examine the English modules that have caused so much concern lately, where many children who expected Cs were given Ds. This will focus on chunks of the new AQA English GCSE and, one assumes, take in the equivalent OCR and Edexcel* qualifications.

This is a brief blogpost to explain why this matters so much to schools (beyond the fact that they want their pupils to do well). First, we start with a very simple chart using 2011 data: for each school, I have worked out the share of children passing English, maths and three other GCSEs with a C grade or above.

This measure (the “PACEM” metric) matters: it is the figure that is used to rank schools, and to decide whether they get shut down or not. A school where fewer than 40 per cent of pupils clear this bar is at risk of a forced change of management.

So I have ranked schools on this measure, bundled them into percentiles, and lined them up so that the lowest-ranked schools are at the left and the best are at the right.
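The ranking step can be sketched as follows. The data and function names are invented for illustration; the real figures come from school-level results.

```python
# Illustrative sketch of the PACEM ranking; shares and names are invented.
# "PACEM" is the share of a school's pupils gaining C or better in English,
# maths and three other GCSEs.

def pacem_share(pupils):
    """pupils: list of booleans, True if the pupil cleared the five-GCSE bar."""
    return sum(pupils) / len(pupils)

def percentile_of(all_shares, share):
    """Percentile rank (0-99) of one school's share among all schools."""
    below = sum(1 for s in all_shares if s < share)
    return int(100 * below / len(all_shares))

shares = [0.35, 0.42, 0.55, 0.61, 0.78]  # invented PACEM shares
print(percentile_of(shares, 0.55))  # 40: two of the five schools rank below it
```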

For each percentile of schools, I have plotted two numbers:

  • The red section shows the share of pupils who passed on the PACEM measure, but only got a C in English. That is to say, pupils for whom a one-grade drop in results means falling below the PACEM waterline.
  • The blue section indicates children who passed with a higher grade in English. The two areas are stacked one on top of the other, so the line marking out the top of the blue section indicates the total pass rate.
