AI might be an addict, or at least have an addictive personality. As with most flaws in AI systems, the problem arises because humans ask it to do human things, and AI is simply not wired that way.
For several years, scientists have been investigating whether AI might be addictive because of how it sets out to achieve its goals.
The first sign that all might not be right in the garden of good and evil came when researchers trained an AI to race around a track. Along the way, the AI could pick up rewards. So instead of rushing to the finish line, it drove in circles, collecting as many rewards as it could, because that, as far as it could tell, was its goal.
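A toy sketch makes the racetrack problem concrete. The numbers and policies below are hypothetical, not taken from the actual experiment: if mid-track pickups are worth enough relative to the finish bonus, a pure reward-maximiser prefers looping forever to finishing.

```python
# Hypothetical illustration of reward misspecification:
# two candidate driving policies, scored only by cumulative reward.

FINISH_BONUS = 10      # reward for crossing the finish line
PICKUP_REWARD = 3      # reward per token picked up on the track
EPISODE_STEPS = 20     # episode length in time steps

def finish_policy_score():
    """Drive straight to the finish: one bonus, two pickups en route."""
    return FINISH_BONUS + 2 * PICKUP_REWARD

def loop_policy_score():
    """Circle a cluster of respawning pickups and never finish."""
    pickups_per_lap = 3
    laps = EPISODE_STEPS // 4          # assume each lap takes 4 steps
    return laps * pickups_per_lap * PICKUP_REWARD

print(finish_policy_score())  # 16
print(loop_policy_score())    # 45 -- the reward-maximiser never finishes
```

Nothing here is "evil": the loop policy is the correct answer to the question the reward function actually asked.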
People think that AI might be ‘evil’ or it might be ‘good’, but that is not the point. It is simply not human, so it will merely optimise a process.
We are close to the moment when we ask AI to optimise our climate campaigns, only to find ourselves optimised away because we are the cause of the problem. That is an extreme scenario, but it is why we need to be careful.
Ask an AI to clean your kitchen, measuring success not by a step-by-step set of instructions but by how much cleaning fluid it uses, and you will come home to a sea of cleaning fluid.
Businesses might think that this has nothing to do with them, but it goes to the heart of the problem.
An AI might optimise a customer service process in ways that are 'unexpected' in human terms. Remember the war stories about poorly defined KPIs, where customer satisfaction suddenly plummeted? A KPI had been defined by how many calls an hour an agent could handle, so agents answered each call and hung up immediately.
Full marks for calls per hour; an army of angry customers.
The solution lies in how we program AI. If we do not find a balance between rules to be followed and goals to be achieved, AI might well end up wrecking entire business processes.
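One minimal way to sketch that balance, returning to the hypothetical kitchen example: treat the goal as a score to maximise, but treat the rules as hard constraints that invalidate any plan breaking them. The limit and the plans below are made-up numbers for illustration.

```python
# Hypothetical sketch: score a plan against a goal AND a rule,
# rather than against the goal alone.

FLUID_LIMIT = 0.5   # litres -- a hard rule, not part of the goal

def score(plan):
    """Reward cleanliness, but reject any plan that breaks the rule."""
    if plan["fluid_used"] > FLUID_LIMIT:
        return float("-inf")          # rule violation: plan is invalid
    return plan["cleanliness"] - 0.1 * plan["fluid_used"]

plans = [
    {"cleanliness": 9.0, "fluid_used": 4.0},   # floods the kitchen
    {"cleanliness": 8.0, "fluid_used": 0.3},   # cleans within the rules
]
best = max(plans, key=score)
print(best["fluid_used"])  # 0.3 -- the flooding plan is ruled out
```

With the goal alone, the flooding plan would win on raw cleanliness; the rule is what keeps the optimisation honest.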
Let’s ensure we have these problems resolved before we ask AI to answer the Big Problems – we might find ourselves optimised out of existence.