Article claims AI will turn machines against their makers

Image by MicroOne | Bigstockphoto

Terrifying claims made in a scientific article in AI Magazine last month predict a high likelihood that the machines will turn against their makers, but the article fails to acknowledge that the machines are so stupid that this is unlikely ever to occur.

The main thrust of the paper is that, as AI systems are pushed to maximise their rewards, they could end up triggering negative consequences for humans.

An example cited is that the AI could end up directing so much energy towards the solution of its tasks, and therefore its rewards, that there would not be enough energy left to grow food, heat homes and so on.

Not just possible, but likely?

Should humans intervene to take the energy back, an existential catastrophe could occur which, according to Cohen, is “not just possible, but likely”.

The lead author of the article is Michael Cohen, a PhD student at the University of Oxford and the Future of Humanity Institute, who has been researching AGI safety for his doctorate (a DPhil, in Oxford parlance).

His co-authors are Michael Osborne, Professor of Machine Learning at Oxford (presumably his supervisor), and Marcus Hutter, a researcher at DeepMind.

The paper begins by making a number of assumptions, and this, in my opinion, is where the validity of its conclusions falls to pieces. In my experience, assumptions are the mother of all mistakes.

The paper ends with “if they [the assumptions] hold: a sufficiently advanced artificial agent would likely intervene in the provision of goal-information, with catastrophic consequences”, which I would not necessarily disagree with.

Assumptions

However, it is the first of the six assumptions that I would contest.

Assumption No. 1 reads: “A sufficiently advanced agent will do at least human-level hypothesis generation regarding the dynamics of the unknown environment”.

In essence, this assumes that AI can perform difficult tasks at a human level of performance or better.

The example the researchers give is an AI able to cure a patient of depression where a human therapist cannot.

Anyone who has used Google Assistant, Alexa, Siri, Xiaodu (Baidu) or Alice (Yandex) will have experienced just how stupid these machines are: they are barely capable of the most basic functions, let alone curing difficult patients of depression.

Furthermore, even the huge language models such as GPT-3 and LaMDA are fundamentally flawed, in my opinion.

For example, Siri constantly wakes up without being asked, Alexa routinely fails to turn off the lights, customer service chatbots never seem to have the answer to one’s query, and Google has been known to direct me into a high-security military base when I was looking for the airport.

Machines remain incapable of driving cars

Furthermore, despite billions of dollars in development spending, machines remain incapable of driving vehicles safely, a task that almost every human on the planet can easily be taught to do.

It is still incredibly difficult to teach a robot to walk on legs, despite this being something that most of the animal kingdom (those members that have legs, at least) can do shortly after birth.

This raises the question of why the machines are so stupid, and the answer is simply that they have no causal understanding of what they are doing.

Neural networks of all shapes and sizes are advanced pattern recognition systems, and all of their conclusions are based on matching historical patterns to outcomes.

This means that if something changes or something new occurs within the task that the machine is trying to solve, then it will catastrophically fail.
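To make the point concrete, here is a small sketch of my own (not from the paper), using Python and scikit-learn: a small neural network is trained to fit a simple pattern, and its predictions are reasonable on inputs like those it has seen but meaningless on inputs outside that range.

```python
# Minimal illustration (my own, not from the paper): a model trained on one
# pattern of data fails badly when the pattern shifts, because it is only
# matching what it has already seen.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Training data: the "historical pattern" the network learns, x in [0, 5]
X_train = rng.uniform(0, 5, size=(1000, 1))
y_train = np.sin(X_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

# Inside the training range the fit is good...
X_seen = np.linspace(0, 5, 50).reshape(-1, 1)
print("error on familiar inputs:", np.abs(model.predict(X_seen) - np.sin(X_seen).ravel()).mean())

# ...but outside it (x in [8, 12]) the predictions bear no relation to reality.
X_new = np.linspace(8, 12, 50).reshape(-1, 1)
print("error on novel inputs:   ", np.abs(model.predict(X_new) - np.sin(X_new).ravel()).mean())
```

The specific numbers do not matter; the point is that the network has learned a pattern rather than the underlying cause, so anything outside that pattern leaves it lost.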

AI is excellent for some tasks

In practice, this means that AI is excellent for tasks where the data set is both finite and stable, but elsewhere it has great difficulty and is unable to generalise or extrapolate as humans can.

This ability is referred to as generalisation: applying what one has learned in one task to another, slightly different one.

The lack of it is by far the single biggest shortcoming in AI systems today, and progress on solving it is glacial, to put it mildly.

Plenty of researchers are looking into this, and in over ten years of work they have come up with almost nothing.

Neural net systems

This problem is so acute in neural-net systems that some researchers think the whole method of creating AI should be thrown away and we should start again.

Hence, it could be 100 years before much progress is made, and yet this paper assumes that the problem has already been solved.

While I agree with the conclusion that there would be something to worry about if the AI generalisation problem were solved, that day remains so far away and so uncertain that I will not lose sleep over it.

Skynet has a very, very long wait before it can enslave or exterminate the human race.

