Today, I gave a brief presentation – based on our previous stories – on the performance of London schools to the excellent Centre for London. Some slides are a little mysterious without my burbling over the top, but I hope it’s understandable enough.
This week, I have written a fair amount about England’s schools, and how well the capital does. I thought that today, I would publish some data that will help explore some finer differences: how well do children do at a borough level?
Below the fold, I have worked out the FT score for each child (a score based on their performance in English, maths and three other GCSEs). I then ran a regression through the data, which predicts performance based on background and by local area.
This is, in effect, a similar exercise to the one in benchmarking school systems, and has all the same caveats. But this time around, the objective is to get a steer on how levels of attainment vary in different boroughs for an individual child of similar social circumstances. Read more
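The sort of regression involved can be sketched in a few lines – though note that everything below is synthetic: the borough effects, the background variables and the "FT score" are all invented stand-ins for the National Pupil Database, chosen only to show how borough dummies recover area differences for an otherwise-similar child:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the pupil-level data: each row is a pupil
# with a deprivation measure, a prior-attainment score and a borough
# indicator (three illustrative boroughs).
n = 3000
deprivation = rng.normal(0, 1, n)
prior = rng.normal(0, 1, n)
borough = rng.integers(0, 3, n)
borough_effect = np.array([0.0, 0.3, -0.2])  # assumed, for illustration

# A made-up "FT score": background plus a borough effect plus noise.
ft_score = (50 + 5 * prior - 2 * deprivation
            + borough_effect[borough] + rng.normal(0, 3, n))

# Design matrix: intercept, background controls, borough dummies
# (borough 0 is the base category).
X = np.column_stack([
    np.ones(n),
    prior,
    deprivation,
    (borough == 1).astype(float),
    (borough == 2).astype(float),
])
coef, *_ = np.linalg.lstsq(X, ft_score, rcond=None)

# coef[3] and coef[4] estimate how boroughs 1 and 2 differ from
# borough 0 for a child of similar social circumstances.
print(coef.round(2))
```

The borough coefficients are the quantity of interest: with background controlled for, they are the "steer" on how attainment varies by area.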
Last summer, there was an eruption of concern among schools that the GCSE English exam had suddenly been made harder by a change in grade boundaries. Ofqual, the exams regulator whose job it is to keep exams equally easy in all years, certainly intervened: what is not clear is whether it got the boundaries right, or made the exam too difficult.
A judge is considering whether the boundary-setting was conducted via a fair process. But we now have some data with which to look at the issue from the National Pupil Database. I have GCSE English (or English language) results and each candidate’s scores at the age of 11 (although not which exam they took, nor their exam board*).
Since the aim of boundary-setting is to keep exams equally difficult, and since Ofqual believes the school system has not improved, we can use these two results together to tell us something: similarly able children at the age of 11 should get roughly the same grade in 2011 and 2012. There are horribly complex ways to do this formally, but I am going for an intuitive method. Read more
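The intuitive method can be illustrated with a toy simulation. The pupil numbers, prior-attainment bands and pass rates below are all invented – 2012 is simply given slightly tougher middle bands – but the mechanics are the point: bin pupils by their age-11 results, then compare the C-plus rate across years within each band:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pupil-level data: a Key Stage 2 band at age 11 (0-4) and
# whether the pupil got a C or better in GCSE English five years later.
def simulate_cohort(n, c_rate_by_band):
    band = rng.integers(0, 5, n)
    got_c = rng.random(n) < np.array(c_rate_by_band)[band]
    return band, got_c

# Assumed pass rates per band; 2012 is lower in the middle bands,
# mimicking the effect of a tougher grade boundary.
band_2011, c_2011 = simulate_cohort(50_000, [0.05, 0.30, 0.65, 0.90, 0.99])
band_2012, c_2012 = simulate_cohort(50_000, [0.05, 0.25, 0.58, 0.88, 0.99])

# Similarly able children should get similar grades in both years:
# a persistent gap within a band is the tell-tale sign.
gaps = []
for b in range(5):
    r11 = c_2011[band_2011 == b].mean()
    r12 = c_2012[band_2012 == b].mean()
    gaps.append(r11 - r12)
    print(f"band {b}: 2011 C+ rate {r11:.2f}, "
          f"2012 C+ rate {r12:.2f}, gap {r11 - r12:+.2f}")
```

If the boundaries were set correctly and the cohorts really are alike, the gaps should sit near zero in every band; gaps concentrated around the middle are exactly what a harder C boundary would produce.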
At the moment, the Department for Education is considering changes to the league tables and the exam system. This seems an opportune moment to make a simple point about qualification-awarding and accountability: English school examinations are subject to measurement error in a really big way.
Here is a simple thought experiment to flesh it out. Imagine a class of 100 students. Let us specify that each one has a “true” ability such that one pupil should get one mark, one pupil should get two, one should get three and so on – up to 100 marks. Now, let’s award this class one of 10 grades: 90+ gets you an A, 80+ a B and so on.
Let us assume that the tests are perfect. If that were the case, you would get ten individuals in each grade. Easy enough. But what happens if we start introducing errors into the test? We can do that with a set of exotically named (but very simple) “Monte Carlo” estimates, which I calculated using this simple spreadsheet. Read more
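For readers without the spreadsheet to hand, the same Monte Carlo estimate can be sketched in a few lines of Python. The error sizes are illustrative, not drawn from any real exam:

```python
import numpy as np

rng = np.random.default_rng(42)

# 100 pupils with "true" marks 1..100; grades are 10-mark bands
# (90+ an A, 80+ a B and so on).
true_marks = np.arange(1, 101)
true_grades = (true_marks - 1) // 10

def misgraded_share(error_sd, trials=2000):
    """Average share of pupils whose observed grade differs from the
    grade their true mark deserves, given Gaussian marking error."""
    total = 0.0
    for _ in range(trials):
        observed = np.clip(true_marks + rng.normal(0, error_sd, 100), 1, 100)
        observed_grades = np.minimum((observed - 1) // 10, 9).astype(int)
        total += (observed_grades != true_grades).mean()
    return total / trials

for sd in (2, 5, 10):
    print(f"error sd {sd:>2}: {misgraded_share(sd):.0%} of pupils misgraded")
```

Even a modest error pushes a striking share of pupils into the wrong grade, because so many sit close to a boundary – which is the simple point about measurement error and accountability.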
Last week, I went to Wolverhampton where I spoke at a local debate, organised by the university and Pat McFadden, the local MP, about the local authority’s schools. I was the warm-up act for Lord Adonis, former schools minister, setting the scene about the city’s education system before his talk on lessons on school improvement.
It was an interesting event – and the city is clearly considering its future and the role of education within it. There is – judging by my inbox – serious and deep interest in improving schools in the city. One of the things I sought to do was set out Wolvo’s position in relation to the rest of the country – and what statistics about the city tell us.
Here is my presentation: Read more
On Thursday afternoon, journalists were taken into the basement of a Westminster building, fed chicken satay and walked through Ofqual’s report on the recent English GCSE. During the summer, a late shift in grade boundaries shocked schools, leaving many high-flying schools with significantly worse results than they had been expecting.
The most striking outcome of the Ofqual research is that it seems to find evidence of cheating. It is incidental to the main purpose of the review, which was to ask whether the shift in the grade boundaries was correct. But it’s a stunning – and quite clear – finding.
Here is the issue: English GCSE can be taken in such a way that the pupil has done everything except for teacher-marked “controlled assessments” in the final months. If they do that, the teachers know what marks each pupil needs. And teachers give those marks.
In the graph below, Ofqual have worked out how many marks candidates needed from their teachers to get a C. If they got a mark to the right of the red vertical line, the teacher gave them a high enough grade to get the C. The shape of that distribution is, frankly, a sign of something horribly wrong. Teachers are massaging marks.
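A toy simulation shows why that shape is so damning. The maximum mark, the threshold and the size of the "nudge" below are all invented for illustration – but the signature is the same: a trough just below the needed mark and a spike exactly on it:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical controlled-assessment marks out of 60, with teachers
# knowing the mark a pupil needs for a C (40 is an assumed figure).
needed = 40
deserved = np.clip(rng.normal(38, 8, 10_000), 0, 60).round().astype(int)

# Massaging: pupils falling just short are nudged up to exactly the
# needed mark; everyone else keeps their deserved mark.
awarded = deserved.copy()
nudge = (deserved >= needed - 3) & (deserved < needed)
awarded[nudge] = needed

# An honest distribution is roughly smooth across the threshold;
# a massaged one empties out just below it and piles up on it.
for m in range(needed - 3, needed + 2):
    print(m, (awarded == m).sum())
```

No plausible pattern of pupil ability produces a cliff like that; only marking with one eye on the boundary does.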
Your birthday matters: children who are older when they start school as 4 year-olds outperform their peers. This is not a small effect, nor does it peter out as they get older. We can spot it easily at the national level among 16 year-olds. Read more
An article in the TES, an education magazine, has caused some consternation – and rightly so. In a comment piece, written by a teacher, the author appears to describe being irritated at a child who is determined to get an A grade rather than a B at A-level.
That is not what the government wants this teacher to be doing. We can tell that from the incentives that this sixth-form teacher faces. The author works at a sixth form college, and if that child fails to get an “A”, it will show up in his college’s results. Sixth forms are ranked on the average grade attained by their students, and pushing a kid from a B to an A shows up in the school point score.
Were this teacher teaching a 16 year-old, however, his behaviour would be perfectly rational. The central measure for schools is the proportion of children getting passes of a C or better in five full GCSEs including English and maths.
Both regulation and league tables drive focus on that measure. There are buckets of data that reveal schools which are particularly focussed on that borderline, but as long as schools do well enough in the core measure, heads can safely ignore everything else.
As Graham Stuart, Tory chair of the education select committee has said, this measure offers no reward for pressing a child to move from a C to an A. It is rational for teachers to focus on getting children over the D/C borderline.
This measure also creates problems for those of us who follow DfE statistics. Read more