Today, I gave a brief presentation – based on our previous stories – on the performance of London schools to the excellent Centre for London. Some slides are a little mysterious without my burbling over the top, but I hope it’s understandable enough.
This week, I have written a fair amount about England’s schools, and how well the capital does. I thought that today I would publish some data to help explore some finer differences: how well do children do at borough level?
Below the fold, I have worked out the FT score for each child (a score based on their performance in English, maths and three other GCSEs). I then ran a regression through the data, which predicts performance based on background and local area.
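For the curious, a minimal sketch of that kind of regression might look like the following – the file and column names here (pupils.csv, ft_score, free_school_meals and so on) are hypothetical stand-ins, not the actual National Pupil Database fields:

```python
# A sketch of a regression of the sort described above, using statsmodels.
# Column names are hypothetical stand-ins for National Pupil Database fields.
import pandas as pd
import statsmodels.formula.api as smf

pupils = pd.read_csv("pupils.csv")  # one row per child (hypothetical file)

# Predict each child's FT score from background characteristics, with a
# fixed effect for each borough; the borough coefficients then give a
# rough local attainment estimate for an otherwise-similar child.
model = smf.ols(
    "ft_score ~ C(ethnicity) + free_school_meals + prior_attainment + C(borough)",
    data=pupils,
).fit()

print(model.params.filter(like="borough"))  # the borough-level effects
```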
This is, in effect, a similar exercise to the one in benchmarking school systems, and has all the same caveats. But this time around, the objective is to get a steer on how levels of attainment vary in different boroughs for an individual child of similar social circumstances.
Last summer, there was an eruption of concern among schools that the GCSE English exam had suddenly been made harder by a change in grade boundaries. Ofqual, the exams regulator whose job it is to keep exams equally difficult from year to year, certainly intervened: what is not clear is whether it got it right or made the exam too difficult.
A judge is considering whether the boundary-setting was conducted via a fair process. But we now have some data from the National Pupil Database with which to look at the issue. I have GCSE English (or English language) results and each candidate’s scores at the age of 11 (although not which exam they took, nor their exam board*).
Since the aim of boundary-setting is to keep exams equally difficult, and since Ofqual believes the school system has not improved, we can use these two results together to tell us something: similarly able children at the age of 11 should get roughly the same grade in 2011 and 2012. There are horribly complex ways to do this formally, but I am going for an intuitive method.
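To give a flavour of what the intuitive method might look like, here is a rough pandas sketch – the file and column names are hypothetical, and this is an illustration rather than the exact calculation I used:

```python
# Band candidates by their age-11 (key stage 2) scores, then compare the
# share getting a C or better in English across 2011 and 2012 within each
# band. File and column names are hypothetical stand-ins.
import pandas as pd

results = pd.read_csv("gcse_english.csv")  # one row per candidate

# Ten equal-sized prior-attainment bands, based on age-11 scores.
results["ks2_band"] = pd.qcut(results["ks2_score"], 10, labels=False)

# If boundaries held steady, similarly able pupils should fare similarly
# in both years; big gaps between the two columns are the warning sign.
results["c_or_better"] = results["gcse_grade"].isin(["A*", "A", "B", "C"]).astype(float)
print(
    results.pivot_table(
        index="ks2_band", columns="year", values="c_or_better", aggfunc="mean"
    )
)
```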
Today’s UCAS statistics are pretty grim: the number of people applying to UK universities is falling, and the drops are steep. A 6 per cent fall in applications since last year is a big deal.
At the same stage last year, 321,908 people had applied for places. This year, it is 303,861. At the 2011 peak, it was 344,064. These are preliminary figures: lots of students are still weighing their options and will apply in the coming months, but it is a substantial fall.
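For anyone checking the headline number against the raw figures, the arithmetic is simply:

```python
# The headline figure, worked through from the numbers reported above.
last_year, this_year = 321_908, 303_861
fall = last_year - this_year           # 18,047 fewer applicants so far
print(f"{fall / last_year:.1%}")       # about 5.6% - roughly 6 per cent
```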
The Department for Education is currently considering changes to the league tables and the exam system. This seems an opportune moment to make a simple point about qualification-awarding and accountability: English school examinations are subject to measurement error in a really big way.
Here is a simple thought experiment to flesh it out. Imagine a class of 100 students. Let us specify that each one has a “true” ability: one pupil should get one mark, one pupil should get two, one should get three and so on – up to 100 marks. Now, let’s award this class one of 10 grades: 91+ gets you an A, 81+ a B and so on.
Let us assume that the tests are perfect. If that were the case, you would get ten individuals in each grade. Easy enough. But what happens if we start introducing errors into the test? We can do that with a set of exotically named (but very simple) “Monte Carlo” estimates, which I calculated using this spreadsheet.
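For readers who prefer code to spreadsheets, the same experiment runs in a few lines of Python – the size of the marking error (a standard deviation of five marks) is an illustrative assumption, not a measured figure:

```python
# The thought experiment in code: 100 pupils with "true" marks of 1 to 100,
# ten grade bands of ten marks each, and random marking error added over
# many trials. The five-mark error standard deviation is an assumption.
import numpy as np

rng = np.random.default_rng(0)
true_marks = np.arange(1, 101)            # one pupil at each mark, 1 to 100
true_grades = (true_marks - 1) // 10      # bands 1-10, 11-20, ..., 91-100

misgraded = []
for _ in range(10_000):                   # Monte Carlo trials
    noisy_marks = true_marks + rng.normal(0, 5, size=100)  # marking error
    awarded = np.clip((noisy_marks - 1) // 10, 0, 9).astype(int)
    misgraded.append((awarded != true_grades).sum())

print(np.mean(misgraded), "pupils per 100 get the wrong grade, on average")
```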
Last week, I went to Wolverhampton, where I spoke at a debate – organised by the university and Pat McFadden, the local MP – about the local authority’s schools. I was the warm-up act for Lord Adonis, the former schools minister, setting the scene on the city’s education system before his talk on lessons in school improvement.
It was an interesting event – and the city is clearly considering its future and the role of education within it. There is – judging by my inbox – serious and deep interest in improving schools in the city. One of the things I sought to do was set out Wolvo’s position in relation to the rest of the country – and what statistics about the city tell us.
Here is my presentation:
The Treasury’s long-awaited review of the Private Finance Initiative has been released as part of today’s Autumn Statement. It contains some pretty damning findings – and some interesting proposals for the years ahead.
Firstly, it’s ditching the much-maligned name “PFI”. Instead, from now on we will have “PF2”. Get used to it. Here are some of the other interesting points in the report.
The argument about GCSE English grades continues to boil away. Legal actions are commencing. The attention has uncovered clues that exam reforms over the past few years have, by accident, been more substantial than ministers or officials had intended. The marking system used for the old O-level might have been reintroduced by stealth – and by accident.
Here’s why: English exams used to deploy a process called “norm referencing” (or “marking on a curve”). That means that, in effect, you hand out grades according to candidates’ positions in the rank order. In 1963, it was decided that roughly the top 10 per cent of A-level entrants would get an A, the next 15 per cent a B and so on.
Since the 1980s, exams have used “criterion referencing”. That is to say, they say: “if you know the date of the Battle of Hastings, that is worth a C. If you know about William the Conqueror’s claim to the throne, you get a B. If you know about Hardrada, you get an A…” Under this model, the number of pupils getting each grade can change from year to year.
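The distinction is easy to see in a toy example – the mark distribution and cut-offs below are purely illustrative:

```python
# Toy contrast between the two systems. Under norm referencing the A
# cut-off moves with the cohort, so the share of As is fixed; under
# criterion referencing the cut-off is fixed, so the share can move.
import numpy as np

marks = np.random.default_rng(1).normal(55, 15, size=1000)  # one cohort

# Norm referencing: the top 10 per cent get an A, wherever that falls.
norm_a_cutoff = np.percentile(marks, 90)
print("norm-referenced As:", (marks >= norm_a_cutoff).mean())  # always ~10%

# Criterion referencing: a fixed threshold (80 marks, say) earns an A.
print("criterion-referenced As:", (marks >= 80).mean())  # varies by cohort
```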
This graph, from Alan Smithers at the University of Buckingham, shows what happened when England switched from one to the other in the late 1980s.