Solving the School Exam Dilemma: how to mark the markers
We have known since early this year that because of Covid-19 and lockdowns, public examinations would not go ahead this summer. Accordingly, the government announced that teachers would award grades to their own pupils, based on data and evidence collected for each one.
Without the external verification that exams provide, there is an obvious potential problem with such a plan. How can we design a mechanism that encourages accurate portrayal of pupil performance, when there may be an incentive to exaggerate?
That’s not to criticise teachers and schools. It’s just an objective assessment of the circumstances they find themselves in.
For a start, there is the simple reality of peer pressure. There may be no league tables, but schools still see each other as competitors, or at least comparators, if only because they are accountable for their results. Then there is parental and pupil pressure. All are acutely aware that the grading system may be tenuous, yet awarded grades will still play a central role in qualifying young people for their next steps.
Thirdly, and perhaps more subtly, the assumption of grade inflation may be self-fulfilling. Every school will assume that every other school also fears being left behind, and will behave accordingly. This self-fulfilling process will be fuelled by the perceived absence of any threat or sanction if teacher assessed grades (TAGs) fall outside tolerance levels or understood variances.
Translated, the fear of giving grades that are too low is much greater than the fear of giving grades that are too high.
Separately, I have analysed this problem using simple game theory to assess the grade inflation bias. In such an analysis, the benefit to a school of producing accurate grades, low-balling, or high-balling depends on the strategies adopted by the other schools.
It is easy to show that no school can expect an advantage from low-balling grades. Sadly, there is equally little advantage in targeting accuracy. In the absence of any change to incentives, the best option for a school is always to high-ball its marks.
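The dominance argument can be sketched in a toy payoff matrix. The three strategies follow the analysis above; the payoff numbers themselves are invented purely for illustration, capturing only the idea that, with no sanction for exaggeration, higher grades can only improve a school's relative standing.

```python
# Illustrative payoff matrix for one school against the field.
# Strategies follow the article; the numbers are invented and only
# encode "higher grades help relative standing, with no sanction".

STRATEGIES = ["low", "accurate", "high"]

# payoff[mine][others]: a school's payoff given its own grading
# strategy and the strategy adopted by the other schools.
payoff = {
    "low":      {"low": 0, "accurate": -1, "high": -2},
    "accurate": {"low": 1, "accurate":  0, "high": -1},
    "high":     {"low": 2, "accurate":  1, "high":  0},
}

def best_response(others: str) -> str:
    """Return the strategy that maximises payoff against `others`."""
    return max(STRATEGIES, key=lambda mine: payoff[mine][others])

# Whatever the other schools do, high-balling is the best response,
# so everyone high-balling is the equilibrium: grade inflation.
for others in STRATEGIES:
    print(f"if others play {others!r}, best response is {best_response(others)!r}")
```

With these (or any similarly ordered) payoffs, high-balling weakly dominates, which is the self-fulfilling dynamic described above.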
An appropriate policy response would impose some cost on grade exaggeration. External moderation of marking by exam boards is a start. But it is unclear whether statements such as “Exam boards may want to discuss the centre’s grading decisions” and “where the exam board disagrees with the centre’s grade, it can withhold results” amount to sufficient threats.
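The effect of imposing such a cost can be sketched in a toy payoff game: a school's relative payoff from each grading strategy, minus an expected sanction (detection probability times penalty) applied only to exaggeration. All numbers here are invented for illustration.

```python
# Toy model: relative payoff of each grading strategy, minus an
# expected sanction for high-balling. All numbers are illustrative.

STRATEGIES = ["low", "accurate", "high"]

# base[mine][others]: relative-standing payoff with no sanction.
base = {
    "low":      {"low": 0, "accurate": -1, "high": -2},
    "accurate": {"low": 1, "accurate":  0, "high": -1},
    "high":     {"low": 2, "accurate":  1, "high":  0},
}

def payoff(mine: str, others: str, sanction: float) -> float:
    """Payoff net of the expected sanction for exaggerating."""
    cost = sanction if mine == "high" else 0.0
    return base[mine][others] - cost

def best_response(others: str, sanction: float) -> str:
    return max(STRATEGIES, key=lambda m: payoff(m, others, sanction))

# With no sanction, inflating is always best; once the expected
# sanction exceeds the relative gain, accuracy becomes best.
print([best_response(o, 0.0) for o in STRATEGIES])   # all "high"
print([best_response(o, 1.5) for o in STRATEGIES])   # all "accurate"
```

The point is not the particular numbers but the threshold: the threat only works if the expected cost of being caught exceeds the relative gain from inflating.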
And it is only fair to schools that exam boards have already begun to agree reductions in their fees, in acknowledgment of schools’ increased costs of collecting data and evidence. Exam boards still need to fund systematic moderation, after all. But they could also be doing more to support schools’ evidence collection as they go about their sample checks.
More importantly, some information on Ofqual’s sampling methodology has been published, but it is not enough. Ofqual looks to be concentrating mostly on outliers, which could let off a host of centres that high-ball their grades through intra-mark inflation (Cs becoming Bs, for example), but not by enough to raise flags. The cumulative effect could be huge.
The DfE consultation documents said that: “Exam boards will put in place arrangements for external quality assurance (QA) to check each centre’s internal QA process […]”. But what we have so far is still too cursory.
It is now early July. We still don’t quite know what is going on prior to results day. And we don’t seem to have worked out what we will do afterwards to hold schools who high-ball to account without penalising students. The deadline for teachers to submit their marks has passed, and results are expected in August. There is still time to get this right, but not a lot.
The government must provide more support to schools in collecting the data and evidence required to corroborate their grades. Ofqual must also announce, ex ante, its specific monitoring and moderation process, and commit to an ex post analysis of marking decisions.
Failing to do so risks the validity of our qualifications system this year. Which will hurt most those who choose to play fair.