Basically Britain needs Prof. Brian Cox shaping education policy:
“If it were up to me I would increase pay and conditions and levels of responsibility and respect significantly, because it is an investment that would pay itself back many times over in the decades to come.”
Don’t use children as ‘measurement probes’ to test schools
What effect does using school exam results to reform the school system have on children? And what effect does it have on society?
Last autumn Ofqual published a report on its study of the consistency of exam marking and the metrics used to measure it.
The report concluded that in English Literature, for example, half of pupils are not awarded the “correct” grade on a particular exam paper, due to marking inconsistencies and the design of the tests.
Given the complexity and sensitivity of the data, Ofqual concluded, it is essential that the metrics stand up to scrutiny and that there is a very clear understanding of the meaning and application of any quality-of-marking measure. They wrote that “there are dangers that information from metrics (particularly when related to grade boundaries) could be used out of context.”
Context and accuracy are fundamental to the value of, and trust in, these tests. And at the moment, trust in the system behind them is not high. There must also be trust in the policy behind the system.
This summer two sets of UK school tests will come under scrutiny: GCSEs and SATs. The goalposts are moving for children and schools across the country. And it’s bad for children and bad for Britain.
Grades A-G will be swapped for numbers 1-9
Pupils aged 15-16 sitting GCSEs will see their exams shift to a numerical system, running from Grade 9 at the top down to Grade 1, with the three top grades replacing the current A* and A. The alphabetical grading system will be fully phased out by 2019.
The plans intend that roughly the same proportion of students who currently achieve a Grade C will be awarded the new Grade 4, and as Schools Week reported: “There will be two GCSE pass rates in school performance tables.”
One will measure grade 5s or above, and this will be called the ‘strong’ pass rate. And the other will measure grade 4s or above, and this will be the ‘standard’ pass rate.
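To make the two thresholds concrete, here is a minimal sketch in Python. The grades are invented for illustration; only the 4-and-above and 5-and-above cut-offs come from the description above.

```python
# Hypothetical set of new-style 9-1 GCSE grades for one school's cohort.
grades = [9, 7, 5, 5, 4, 4, 3, 2, 6, 1]

# 'Standard' pass: grade 4 or above; 'strong' pass: grade 5 or above.
standard_rate = sum(g >= 4 for g in grades) / len(grades)
strong_rate = sum(g >= 5 for g in grades) / len(grades)

print(f"standard pass rate: {standard_rate:.0%}")  # 70%
print(f"strong pass rate: {strong_rate:.0%}")      # 50%
```

The same cohort thus yields two different headline figures, which is exactly what will appear side by side in the performance tables.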
Laura McInerney summed up, “in some senses, it’s not a bad idea as it will mean it is easier to see if the measures are comparable. We can check if the ‘standard’ rate is better or worse over the next few years. (This is particularly good for the DfE who have been told off by the government watchdog for fiddling about with data so much that no one can tell if anything has worked anymore).”
There’s plenty of confusion among parents about how the numerical grading system will work. The confusion you can gauge in playground conversations is also reflected nationally, in a more measurable way.
Market research across a range of audiences – including businesses, head teachers, universities, colleges, parents and pupils – found that just 31 per cent of secondary school pupils and 30 per cent of parents were clear on the new numerical grading system.
So that’s a change in the GCSE grading structure. But why? If more differentiators are needed, why not add one or two more letters and shift grade boundaries? The policy need for these changes is unclear.
Machine marking is training on ten-year-olds
I wonder if any of the shift to numerical marking is due in part to a desire to move GCSEs to machine marking in future.
This year, ten- and eleven-year-olds, children in their last year of primary school, will have their SATs tests computer marked.
That’s everything in maths and English. Not multiple choice papers or one word answers, but full written responses. If their f, b or g doesn’t look like the correct letter in the correct place in the sentence, then it gains no marks.
Parents are concerned about children whose handwriting is awful but whose knowledge is not. How well can they hope to be assessed? If exams are increasingly machine marked out of sight, many sent to India, where is our oversight of the marking process and its accuracy?
The concerns I’ve heard among local parents and staff seem reflected in national discussions and by the inspectorate, Ofsted. TES has reported Ofsted’s most senior officials as saying that the inspectorate is just as reluctant to use this year’s writing assessments as it was in 2016. Teachers and parents locally are united in feeling it is not accurate, not fair, and not right.
How will we know what is being accurately measured, and how accurate the metrics are, when content changes at the same time? How will we know if children didn’t make the mark, or if the marks were simply not awarded?
The accountability of the process is less than transparent to pupils and parents. We have little opportunity for the scrutiny of these metrics that Ofqual recommends, or of the data the system holds on our kids.
Causation, correlation and why we should care
The real risk is that no one will be able to tell if there is an error, where it stems from, or whether there is a reason pass rates are markedly different from what was expected.
After the wide range of changes across pupil attainment, exam content and school progress scores, and their interactions and dependencies, can the results all fit together and be comparable with the past at all?
If the SATs are making lots of mistakes simply through being bad at reading ten-year-olds’ handwriting, how will we know?
Or if GCSE scores are lower, will we be able to see if it is because they have genuinely differentiated the results in a wider spread, and stretched out the fail, pass and top passes more strictly than before?
What is likely is that this year’s set of children who were expecting As and A*s at GCSE, but who fail to be one of the two children nationally predicted to get the new Grade 9 across the board, will be disappointed to feel they are not, after all, as great as they thought they were.
And next year, if you can’t be the one or two to get the top mark, will the best simply stop stretching themselves and rest a bit easier, because, whatever, you won’t get those straight As anyway?
Even if children would not change their behaviour were they to know, the target-range scoring sent by third-party data processors to schools discourages teachers from stretching those at the top.
Politicians look for positive progress, but policies are changing that will increase the number of schools deemed to have failed. Why?
Our children’s results are being used to reform the school system.
Coasting and failing schools can be compelled to become academies.
Government policy on this forced academisation was rejected by popular revolt. It appears that the government is determined that schools *will* become academies with the same fervour that they *will* re-introduce grammar schools. Both are unevidenced and unwanted. But there is a workaround. Create evidence. Make the successful scores harder to achieve, and more will be seen to fail.
A total of 282 secondary schools in England were deemed to be failing by the government this January, as they “have not met a new set of national standards”.
It is expected that even more will attain ‘less’ this summer. Tim Leunig, Chief Analyst and Chief Scientific Adviser at the Department for Education, made a personal guess that two pupils would reach the top mark across the board.
2 is my guess – not a formal DfE prediction. With a big enough sample, I think someone will get lucky… https://t.co/e4RqNy51TY
— Tim Leunig (@timleunig) March 25, 2017
The context of this GCSE ‘failure’ is the changes in how schools are measured. Children’s progress over 8 subjects, or “P8” is being used as an accountability measure of overall school quality.
But it’s really just: “a school’s average Attainment 8 score adjusted for pupils’ Key Stage 2 attainment.” [Dave Thomson, Education Datalab]
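Datalab’s one-line description can be sketched roughly as follows. The Key Stage 2 bands, the expected Attainment 8 scores and the pupil data are all invented for illustration, and the official measure has further details (such as dividing the point difference by the ten Attainment 8 slots) that are omitted here.

```python
# Invented national-average Attainment 8 scores by Key Stage 2
# prior-attainment band (illustrative numbers, not DfE data).
expected_a8_by_ks2_band = {"low": 30.0, "middle": 48.0, "high": 65.0}

# Hypothetical cohort: (KS2 band, actual Attainment 8 score) per pupil.
pupils = [("low", 35.0), ("middle", 45.0), ("high", 70.0)]

# Per-pupil progress: actual A8 minus the average A8 of pupils with the
# same starting point; the school's P8 is the mean across its pupils.
progress = [a8 - expected_a8_by_ks2_band[band] for band, a8 in pupils]
school_p8 = sum(progress) / len(progress)

print(f"school P8 (simplified): {school_p8:+.2f}")  # +2.33
```

Even in this toy version, the school’s score depends entirely on which “expected” baseline each pupil is compared against, which is why contextualising those baselines can move schools so much.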
Work done by FFT Education Datalab showed that contextualising P8 scores can lead to large changes for some schools. (Read more here and here.) You cannot meaningfully compare schools with different types of intake, but it appears that the government is determined to do so, starting ever younger if new plans go ahead.
Data is being reshaped to tell stories to fit to policy.
Shaping children’s future
What this reshaping doesn’t factor in at all is the labelling of a generation or more with personal failure, from age ten and up.
All this tinkering with the data isn’t just about data.
It’s tinkering badly with our kids’ sense of self, their sense of achievement and aspiration, and with that, the country’s future.
Education reform has become the aim, and it has replaced the aims of education.
Post-Brexit Britain doesn’t need policy that delivers ideology. We don’t need “to use children as ‘measurement probes’ to test schools.”
Just as we shouldn’t use children’s educational path to test their net worth or cost to the economy. Or predict it in future.
Children’s education and human value cannot be measured in data.