An educational algorithm just sat its first exam and did not pass the test

[Image credit: Kirill_M/bigstockphoto.com]

This year, with the pandemic having shut schools, exams could not be sat, and grades were instead assigned by an algorithm. Across large parts of the world, students and teachers alike were shocked by the results.

August is generally the month when attention focuses on exam results that are vital to students and will determine whether they can follow their dreams or whether they need to choose another path.

This year, almost universally, students’ grades were adjusted downwards. There are many examples of students simply unable to understand how on earth the computer arrived at the results it did.

Many are now demanding that the workings of the algorithm be laid out for examination, but the designers are refusing.

There are, at the very least, several flaws. Not least among them is that the algorithm was designed around each school, not each student.

There is also an important lesson to be learned from this high-profile scandal: algorithms are programmed by people, and people are biased.

A clue comes from the UK (where a Chemistry teacher is appealing the results of his entire class): the education regulator, Ofqual, said in a statement that teachers’ predicted grades were ‘unreasonably high’.

So, the thinking might have gone, if teachers tend to over-estimate their students’ grades, then we will programme the algorithm to under-estimate them.
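As a purely hypothetical illustration (the real algorithm was never published, and every name and number below is an assumption), a school-level downgrade of this kind might look like the following sketch. It shows why applying a blanket adjustment per school, rather than per student, punishes every pupil equally regardless of individual merit:

```python
# Hypothetical sketch only: the actual grading algorithm was not made public.
# This illustrates a blanket, school-level downgrade applied to
# teacher-predicted grades (here on a 1-9 scale, 9 being highest).

def downgrade(predicted_grades, school_adjustment):
    """Apply one uniform downward adjustment to every student's
    teacher-predicted grade, flooring at the minimum grade of 1."""
    return [max(1, g - school_adjustment) for g in predicted_grades]

# One class's teacher-predicted grades (invented numbers).
teacher_predictions = [9, 7, 7, 5, 4]

# A per-school adjustment of 1 lowers every student, top to bottom.
final_grades = downgrade(teacher_predictions, school_adjustment=1)
print(final_grades)  # [8, 6, 6, 4, 3]
```

The point of the sketch is that the adjustment knows nothing about any individual student: the strongest and weakest pupils in the class lose exactly the same amount.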

It is highly debatable whether teachers really do over-estimate their charges’ grades. It is much more likely that they do the opposite, setting expectations at levels where students will not be crushed by their results.

The combination of realistic or slightly pessimistic predicted grades from teachers and the possible bias of the programmers would certainly produce the results that have thrown students around the world into a collective depression. It has also ruined their summer holidays: they must hit the books, not the beach, to prepare for new exams in October or November.

That this algorithm was designed in a hurry to provide a solution to an almost untenable problem is not in doubt.

What is very much in doubt is the extent to which bias was involved in the results. It is a lesson for our entire industry: the biggest challenge is to make sure that no bias becomes part of the machine learning process.
