GCSEs

Chris Cook

This week, I have written a fair amount about England’s schools, and how well the capital does. I thought that today, I would publish some data that will help explore some finer differences: how well do children do at a borough level?

Below the fold, I have worked out the FT score for each child (a score based on their performance in English, maths and three other GCSEs). I then ran a regression through the data, which predicts performance based on background and local area.
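As a rough sketch of the first step, a score of this shape might be computed like so. To be clear, the point scale and subject names below are my own illustrative assumptions, not the FT's actual methodology:

```python
# Illustrative grade-to-points mapping (an assumption, not the FT's scale).
POINTS = {"A*": 58, "A": 52, "B": 46, "C": 40, "D": 34, "E": 28, "F": 22, "G": 16}

def ft_score(results):
    """results: dict of subject -> grade for one pupil.
    Score = points in English and maths, plus the pupil's best three others."""
    core = [POINTS[results[s]] for s in ("English", "Maths")]
    others = sorted((POINTS[grade] for subject, grade in results.items()
                     if subject not in ("English", "Maths")), reverse=True)[:3]
    return sum(core) + sum(others)

pupil = {"English": "B", "Maths": "A", "History": "C",
         "French": "B", "Art": "A*", "PE": "D"}
print(ft_score(pupil))  # 46 + 52 + (58 + 46 + 40) = 242
```

The regression step would then model this score as a function of background characteristics and borough, so the borough coefficients give the "similar child, different area" comparison.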

This is, in effect, a similar exercise to the one in benchmarking school systems, and it carries all the same caveats. But this time around, the objective is to get a steer on how attainment varies between boroughs for children of similar social circumstances.

Chris Cook

Last summer, there was an eruption of concern among schools that the GCSE English exam had suddenly been made harder by a change in grade boundaries. Ofqual, the exams regulator whose job it is to keep exams equally difficult in all years, certainly intervened: what is not clear is whether it got the boundaries right, or made the exam too hard.

A judge is considering whether the boundary-setting was conducted via a fair process. But we now have some data with which to look at the issue from the National Pupil Database. I have GCSE English (or English language) results and each candidate’s scores at the age of 11 (although not which exam they took, nor their exam board*).

Since the aim of boundary-setting is to keep exams equally difficult, and since Ofqual believes the school system has not improved, we can use these two results together to tell us something: similarly able children at the age of 11 should get roughly the same grade in 2011 and 2012. There are horribly complex ways to do this formally, but I am going for an intuitive method.
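The intuitive comparison can be sketched in a few lines: bucket candidates by their age-11 score band, then compare the share reaching a C in each year. The records and band labels below are invented for illustration:

```python
from collections import defaultdict

def c_rate_by_band(candidates):
    """candidates: list of (age11_band, gcse_grade) tuples.
    Returns the share of each band reaching grade C or above."""
    totals, passes = defaultdict(int), defaultdict(int)
    for band, grade in candidates:
        totals[band] += 1
        # For single-letter grades plus "A*", string comparison works:
        # "A*", "A", "B", "C" all sort at or before "C"; "D".."G" after.
        if grade <= "C":
            passes[band] += 1
    return {band: passes[band] / totals[band] for band in totals}

y2011 = [(4, "C"), (4, "C"), (4, "D"), (5, "B")]
y2012 = [(4, "C"), (4, "D"), (4, "D"), (5, "B")]
print(c_rate_by_band(y2011))  # band 4: 2/3 reach a C
print(c_rate_by_band(y2012))  # band 4: 1/3 reach a C
```

If the pass rate for the same age-11 band falls sharply between years, that is at least suggestive of a tougher boundary rather than a weaker cohort.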

Chris Cook

The Ofqual decision that all is well on the English GCSE has not been received well by schools. I thought, further to my last post, that it would help to understand school leaders’ feelings about this if we took a case study of an excellent school.

I have asked Sally Coates, head of Burlington Danes Church of England Academy – one of the Ark Schools – to explain what she went through last week.  Before you read her account, I thought I would explain why this particular school matters.

Like other Ark Schools, BDA uses “progression” to gauge its success. It benchmarks itself on improving and stretching each child, regardless of the level of their education when they enter. It does not simply attempt to hit the government’s targets.

As a result, BDA expends effort on people who already know enough to get Cs in English, maths and three other subjects – the basket of achievement used by the government to measure school success. This school does not – unlike others – fixate on the C/D line.

This is easy to spot: Ark’s performance rises dramatically when you use a measure that gives schools credit for getting children to higher grades than C. BDA stacked up 24 children last year who managed straight As in English, maths and three other subjects.

Let me be clear: the school does keep an eye on that grade boundary. Here, indeed, is a photo of the Venn diagram Ms Coates describes below, enhanced with some light photoshopping to make sure it is entirely anonymous.

Children are placed in a circle showing where they are weak. Each child in each circle gets appropriate tutoring to help drive them up to the line.

But this intervention is only one of a chain of monitoring lines. Children in the “safehouse” are monitored against higher grades elsewhere. I will return to this, but BDA’s results show far more As and Bs than is normal.

That is why Ms Coates’s anger is so important: despite being focused on progression, not the narrow “C will do” measures used by the government, the school was caught out by the shift in the C/D boundary. Now, over to Ms Coates:

Chris Cook

Over the weekend, Ofqual announced it will examine the English modules that have caused so much concern lately, where many children who expected Cs were given Ds. This will focus on chunks of the new AQA English GCSE and, one assumes, take in the equivalent OCR and Edexcel* qualifications.

This is a very brief blogpost to explain why this matters so much to schools (beyond the fact that they want their pupils to do well). First, we start with a very simple chart using 2011 data: for each school, I have worked out the share of children passing English, maths and three other GCSEs with a C grade or above.

This measure (the “PACEM” metric) matters: it is the figure used to rank schools, and to decide whether they get shut down or not. A school where fewer than 40 per cent of pupils clear the line is at risk of a forced change of management.

So I have ranked schools on this measure, bundled them into percentiles, and lined them up so that the lowest-ranked schools are at the left and the best are at the right.

For each percentile of schools, I have published two numbers:

  • The red section shows the share of pupils who passed on the PACEM measure, but only got a C in English. That is to say, pupils for whom a one-grade drop in results means falling below the PACEM waterline.
  • The blue section indicates children who passed with a higher grade in English. The two areas are stacked one on top of the other, so the line marking out the top of the blue section indicates the total pass rate.
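The two stacked series can be sketched as follows. The records and field names are invented for illustration, and the `passed` flag stands in for the full five-subject check:

```python
def pacem_split(pupils):
    """pupils: list of dicts with 'passed' (bool, the PACEM check)
    and 'english' (the pupil's English grade).
    Returns (red, blue): the share passing with exactly a C in English,
    and the share passing with a higher English grade."""
    n = len(pupils)
    red = sum(p["passed"] and p["english"] == "C" for p in pupils) / n
    blue = sum(p["passed"] and p["english"] in ("A*", "A", "B") for p in pupils) / n
    return red, blue

school = [{"passed": True, "english": "C"},
          {"passed": True, "english": "B"},
          {"passed": False, "english": "D"},
          {"passed": True, "english": "C"}]
red, blue = pacem_split(school)
print(red, blue)  # 0.5 0.25 -- red + blue gives the total pass rate, 0.75
```

The red share is the vulnerable margin: every pupil counted there drops below the PACEM waterline if their English result falls by a single grade.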


Chris Cook

One of my grand theories is that public policy types are generally bad at geography. Or, at the least, they underestimate the importance of where you live. Here, below the fold, are two zoomable maps, coloured by the school performance of local state-educated children. The map is based on where the children live, not where they go to school. To explain:

  • The colouring is red for weaker results and blue for better ones. Darker colours mean more extreme results. If you want detail on an area, click on any one of the blobs and it should give you a rundown of local statistics, where possible.
  • Both maps are coloured according to FT score results: that is the sum of state-educated pupils’ scores in English, maths and their top three other subjects. Other data, including official measures, are in the boxes that pop up.
  • On the first map, the geographical blobs are smaller than on previous maps: lower-layer super output areas in high-density places, and middle-layer super output areas in zones of low density (this way, we can show maximum detail).
  • That map can be quite frazzling. The second might be more to some people’s tastes. This is exactly the same sort of data, just arranged by parliamentary constituency. Since constituencies are bigger lumps, we can include more detailed data.
  • For the constituencies, I have given a barrage of results for all local children in state schools. But also the same just for FSM-eligible children, and for children dubbed “middle attainers” – kids who score in the middle tenth of results aged 11.
  • (NB – Where statistics are missing, it is to prevent people combining data sources to work out something about individual children.)

If you want a tour, I’d recommend scrolling along the coasts. Check out some of the coastal towns, and look at the belt of towns and cities between Hull and Liverpool. Also, take a peek at how few dark red areas there are in London. In-borough variation is interesting, too: look at the massive variation within, say, Kent.