The problem is that humans are even more curious than cats, particularly when it comes to things like AI. Cats are curious up to a point, and then they sleep, or annoy humans who don't like cats.
This would be fine, but curiosity might not just kill the cat; it might kill humans too.
AI is a classic case of human curiosity that will run and run.
We might hide behind the business benefits of implementing AI and automating anything that moves (and the bottom line says that is a great idea). But let's face it: humans do things because we wonder whether we can, and what the world will look like afterwards.
Almost every day, the news has some new element of AI-ness in it.
One article ponders whether AI could be trained well enough to write for the New Yorker (we'll leave the jokes about that one to you).
Another (in a column of funny snippets from weird newspapers) describes the robot priest that has just been installed at a 400-year-old temple in Kyoto. "With AI," says a (human) priest, "we hope it will grow in wisdom to help people overcome even the most difficult troubles."
Another leads with a picture of an 'AI pioneer' looking thoughtful, actually slightly sad, above an article saying he wants his algorithms to understand the 'why', not just the 'how'.
The problem with all this is that if algorithms are able to work out the 'why', it may not look good for us curious humans. The answer to the question (apart from '42', obviously) may well be that a more dramatic approach is needed if the planet is to be saved; that we should stop building our future on technologies that are inherently vulnerable; and who knows what other practical, uncurious, dispassionate conclusions.
The real point is that our curiosity will not allow us to remain in charge of AI. We will want to see how far it can go, and when we find out, it may not go well at all.
Where was that temple again?