The exit of Geoffrey Hinton from Google has caused a stir and reignited the AI safety/Armageddon debate, but the reality remains that humans are at much greater risk from humans than they are from machines, which remain as dumb as ever.
Geoffrey Hinton is widely known as the “godfather of AI”: it was he and his collaborators who pioneered neural-network research through the 1970s and 1980s and finally got the technique working at scale in 2012, sparking the real-world use cases we see today.
Following his exit from Google, he has been speaking to the media about the dangers of AI. Some of these dangers are real, but others have no basis in data. The media, ever eager for clicks, is spinning his departure as Oppenheimer quitting the Manhattan Project. The reality is that this is mostly about retirement (which he admits): Dr Hinton is now 75 years of age.
In his interview with the New York Times, his concerns boil down to two things.

1. A flood of misinformation

This is the one that pretty much everyone agrees with. ChatGPT and other LLMs have a surprising ability to generate highly credible text, which – combined with Midjourney’s image generation – raises the possibility of a flood of content so convincing that consumers will not be able to tell fact from fiction.
As usual, the focus is on the tool or the technology rather than on the human malevolence that will be required to prompt, incite or direct these AIs to commit such acts.
This is why the focus also needs to be on ensuring that bad actors cannot access the latest technology, and on creating systems capable of distinguishing fakes from the real thing. These safeguards are much harder to implement, but they have the benefit of not hampering the development of these systems for legitimate purposes, which a blanket ban or restriction is very likely to do.
Part of the problem here is that the genie is already out of the bottle. Anyone with a bit of skill and some compute power and data can create one of these algorithms. I suspect that the devil is in the detail – to make them really good takes a huge amount of skill, which is currently in very short supply.
This scarcity offers a potential control point for limiting the spread of this type of misinformation and propaganda, but the biggest problem is that no one really knows what these systems are capable of in this context. It is this fear of the unknown that is driving the calls to slow, stop or heavily regulate AI development.
2. AI Armageddon
Dr Hinton expresses the opinion that the time when AI is smarter than humans is no longer 30 to 50 years away but is now much closer. This is where the AI community divides, with the massive data camp going one way and the neurosymbolic crowd going the other.
The massive-data crowd are of the opinion that the spark of sentience is a mere function of complexity and quantity, meaning that with enough data and compute it will magically appear. Yet the scientific literature indicates that even the best machines have no grasp of causality, which is why they still hallucinate and invent facts and data.
Without a grasp of causality, there is no way any of these systems will ever produce sentience or superhuman intelligence. As long as we use the systems that Dr Hinton invented, this is almost certain never to happen.
The problem is that these chatbots are so good at simulating sentience that humans tend to anthropomorphise them. I think this is what’s leading to these views that superhuman intelligence is just around the corner.
The reality is that it isn’t – these chatbots remain as stupid as ever. My 7-year-old can easily outwit them when they are both presented with something that they have not been explicitly taught.
Hence, I continue to think that the real threat of generative AI comes from humans who want to do bad things and will use it to do so. This is where the focus on prevention needs to be.
I think we also need to bear in mind that blanket regulation has the potential to damage the development of AI that has legitimate, lawful and very profitable use cases.
In short, still no sign of those killer robots coming over the hill.