
There is only one problem with AI, and it is an old one: data entry. Data entry has been the bane of any life immersed in IT: the billing system update, the customer service call where details are captured incorrectly, or the large-scale, big-bang migrations that can bring down CEOs. Data entry is the Achilles’ heel of the digital age.
‘Garbage in, garbage out’ is the old saying. And it is no less true when it comes to AI. In fact, given that we are thinking seriously about handing over vast swathes of decision-making to AI robots, it is even more vital that we get it right – from the very beginning.
Doomsayers refer to scenarios where one robot given the green light to kill a human might – probably after an unfortunate short circuit during a dramatic lightning strike – reprogram itself to kill all humans. That is a little far-fetched.
Think instead of the trading decisions, the legal decisions, the HR decisions – particularly the HR decisions – that we will hand over to AI, and the problem of data entry leaps into focus.
Of course, being humans living in the digital age, our first reaction is to think, “We need to get the human errors out of data entry, so we’ll automate the data entry process.” Obviously we cannot input trillions of data entries by hand; that would merely increase the danger of, er, human error. The critical bit is the step before that.
The people who decide what data is entered – and therefore what data the AI machines then look for, map, act upon and fold into their view of the world – bear the future of the world on their shoulders.
We are all, let’s face it, slightly biased, slightly opinionated. And we can be swayed. If we weren’t, and couldn’t be, we wouldn’t be human in the first place – at least not in any fun or interesting way.
Imagine the person in charge of the team (or even just part of it) that decides the criteria for hiring or not hiring someone. A majority of the team like a glass of wine in the evening, or they hate turkey, or they were bullied at school, or they drive a certain car.
OK, many would respond to this example by saying, “Well, that’s how humans work. You employ people who like what you like and have the same outlook. It actually gives you an edge.” It is called ‘culture’.
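To make the mechanism concrete, here is a minimal sketch in Python – every field name and weight below is a hypothetical assumption for illustration, not a real system – of how an automated hiring screen inherits whatever its data designers happened to value:

```python
# A minimal, hypothetical sketch: every field and weight below is an
# assumption for illustration, standing in for whatever the team that
# defined the data happened to value.

CRITERIA_WEIGHTS = {
    "enjoys_wine_evenings": 0.25,       # the team's own habit, recast as 'culture fit'
    "dislikes_turkey": 0.15,
    "drives_approved_car_brand": 0.20,
    "same_background_as_team": 0.40,
}

def screen_candidate(candidate: dict) -> float:
    """Score a candidate using only the fields the data designers chose.

    Anything not listed in CRITERIA_WEIGHTS is invisible to the system,
    however relevant it may actually be to the job.
    """
    return sum(
        weight * float(candidate.get(field, 0))
        for field, weight in CRITERIA_WEIGHTS.items()
    )

# The system never questions the criteria; it only applies them, at scale.
applicant = {"publications": 12, "years_of_experience": 10}
print(screen_candidate(applicant))  # 0.0 – a strong CV scores nothing here
```

The arithmetic is trivial; the danger is the field list itself. Once the team’s preferences become the schema, every decision the machine makes downstream inherits them automatically.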
In some scenarios, such examples are irritating at best. In others, when it comes to the Big Stuff – where academia is heading, where politics is heading, where ethics is heading and, obviously, where conflict is heading – it becomes a huge responsibility for the data decision makers to be fair, even-handed and open. They also have to be future-proof – what made perfect sense two decades ago might now be seen as wrong or biased.
It is almost as if we have to dehumanize humans in order to properly automate robots. Start with a glitch and the problems will multiply and multiply.
We humans have to be very careful about that.