On 13th August, around 300,000 UK students received results for exams they could not sit. Tears and confusion are not unusual on results day, but the nationwide uproar that followed this year’s release of results was unprecedented, as everything pandemic-related tends to be.

39.1% of A-Level grades were downgraded, meaning students received a mark at least one grade below their teachers’ predictions. Students took to social media and the streets to protest the ‘biased’ algorithm that determined their grades and dashed their dreams.

The algorithm’s output was based on three main factors (a rough sketch of the idea follows the list):

  1. Historical grade distribution: for each institution, the percentage of past students who had achieved each grade in each subject.
  2. Prior attainment: Key Stage 2 or GCSE grades of existing and past cohorts.
  3. Expected national grade distribution for each subject based on previous years’ attainment.
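To make the standardisation idea concrete, here is a minimal, hypothetical Python sketch of rank-based grade allocation: teachers rank their students, and grades are handed out so that the cohort’s distribution matches the centre’s historical one. The function, names, data and rounding rule are illustrative assumptions only, not Ofqual’s published model, which also adjusted for prior attainment and the expected national distribution.

```python
# Hypothetical illustration only: a toy version of rank-based standardisation,
# not Ofqual's actual model.

def assign_grades_by_rank(ranked_students, historical_distribution):
    """Allocate grades to a teacher-ranked cohort (best first) so that the
    cohort's grade distribution mirrors the centre's historical one.

    ranked_students: list of student names, ordered best to worst.
    historical_distribution: dict mapping grade -> share of past students
        who achieved it (shares sum to roughly 1), listed best grade first.
    """
    n = len(ranked_students)
    grades = {}
    cumulative = 0.0
    start = 0
    bands = list(historical_distribution.items())
    for i, (grade, share) in enumerate(bands):
        cumulative += share
        # The final band absorbs any rounding slack so every student gets a grade.
        end = n if i == len(bands) - 1 else round(cumulative * n)
        for student in ranked_students[start:end]:
            grades[student] = grade
        start = max(start, end)
    return grades


if __name__ == "__main__":
    cohort = ["Asha", "Ben", "Chloe", "Dev", "Ella",
              "Femi", "Grace", "Hari", "Iris", "Jack"]
    past_results = {"A*": 0.1, "A": 0.2, "B": 0.3, "C": 0.3, "D": 0.1}
    # Jack is ranked last, so he receives a D regardless of his own work.
    print(assign_grades_by_rank(cohort, past_results))
```

Notice that nothing about an individual student’s own work enters the calculation: only their rank and their school’s past results do, which is precisely the grievance explored below.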

Missing from this list are students’ predicted grades. Ofqual, the qualifications regulator in England, justified the omission, stating that “the likely inflationary effects of using unstandardised teacher estimates would undermine confidence in the grades awarded”.

The algorithm’s overriding aim was standardisation; arguably, this mattered more than providing a true reflection of individual students’ attainment.

The loudest complaint against the algorithm was that it reinforced inequality. For institutions with fewer than 15 students taking a subject, more weight was given to teachers’ predictions, which tend to overestimate attainment.

As a result, private schools saw an increase in top grades twice as large as that of comprehensives, and high performers in historically low-performing schools were penalised. Such outcomes do nothing to combat existing educational inequality.

The Commission on Inequality in Education (2017) found that a far lower proportion of students receiving free school meals achieve at least five A* to C grades at GCSE than their better-off peers. Moreover, a disproportionately high number of high-scoring students come from the richest 10% of households.

The incident has also provided another unwelcome example of algorithmic bias, which arises when algorithms rely on existing data that is unrepresentative of the population they affect.

Predicted grades are not perfect. Between 2013 and 2015, only 16% of predictions were accurate while 75% of grades were overpredicted. But which is worse: the inaccuracy of teachers’ predictions or an assessment regulator that can produce a discriminatory algorithm without flinching? It’s a difficult choice.

Ofqual have now decided to use teacher predictions for both A-Level and GCSE results, which contributed to the huge increase in top GCSE grades released on 20th August. Nevertheless, the question of how best to reflect students’ true attainment is still up for debate.

Some, like Ofqual, argue that algorithms, however impersonal, are the answer. Others say that a computer could never know their ability better than they and their teachers do.

Every year, thousands of students are ‘downgraded’ because their exam performance did not match their predicted grades. It is upsetting, but it is acceptable.

Being downgraded based on data that largely has nothing to do with you has proven to be, in the eyes of many, unacceptable.
