
ChatGPT, and AI in general, is having a bad month, with a series of blunders that once again demonstrate that the machines are far too stupid to do anything that could remotely be described as intelligent in human terms.
ChatGPT kicked off the series of howlers when CNET articles written mostly by AI were found to contain errors requiring corrections, which in turn has led to a complete review of all content that ChatGPT or other AI had a hand in creating.
Soon after, someone instructed ChatGPT to “write a song in the style of Nick Cave” and sent the results to the musician. Cave was less than impressed, describing the lyrics as a “grotesque mockery of what it is to be human”.
Shortly after that, it emerged that armies of humans are being employed to stop ChatGPT from going off the rails, as many chatbots before it have done.
While all this was going on, Reuters reported that a Tesla engineer testified in July last year that the company’s demonstrations of autonomous driving in 2016 were faked (“aspirational” in Tesla-speak). Not a single drive exhibited was performed entirely by the machine, with Tesla falsely stating that the drivers were in the driver’s seat for safety purposes only.
AI is still dumb
All of these blunders stem from a single fact: none of these AIs has any causal understanding of what it is doing.
Instead, they statistically twist the characteristics of the data until a line can be drawn between certain outcomes and a conclusion drawn from it. There is no fundamental understanding, exemplified by the fact that Tesla’s machine vision system (or anybody’s, for that matter) does not know that it is driving down a road … or even what a road is.
The practical upshot of this is that none of these systems has the ability to adapt when something changes or when it is presented with something that it has not been explicitly taught. Even a trivial change, such as switching the colour of road markings from white to yellow, is enough to throw a complete spanner in the works.
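To make the point concrete, here is a minimal, purely illustrative sketch in Python (not Tesla’s or anyone’s real perception code) of a statistical classifier that “learns” white lane markings versus grey asphalt from pixel colours alone, and then misclassifies a yellow lane marking it was never shown:

```python
# Illustrative sketch only: a nearest-centroid classifier over RGB pixel values.
# It learns what white lane paint and grey asphalt looked like during training,
# and nothing else - no concept of "road" or "paint" exists anywhere in it.
import numpy as np

rng = np.random.default_rng(0)

# Training data: whitish "lane marking" pixels and grey "asphalt" pixels.
white_paint = rng.normal(loc=[230, 230, 230], scale=10, size=(500, 3))
grey_road = rng.normal(loc=[90, 90, 90], scale=10, size=(500, 3))

centroids = {
    "lane_marking": white_paint.mean(axis=0),
    "road": grey_road.mean(axis=0),
}

def classify(pixel: np.ndarray) -> str:
    """Assign the label whose training centroid is nearest in RGB space."""
    return min(centroids, key=lambda label: np.linalg.norm(pixel - centroids[label]))

# In-distribution input: a white lane marking is recognised correctly.
print(classify(np.array([235, 235, 235])))   # -> lane_marking

# Out-of-distribution input: yellow paint is closer to the grey-road centroid
# than to the white-paint centroid, so the lane marking is now called "road".
print(classify(np.array([220, 190, 40])))    # -> road
```

The classifier only measures distances to what it saw during training, which is why an input outside that distribution derails it; nothing in the numbers tells it that a lane marking is still a lane marking when the paint changes colour.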
This is why ChatGPT is frozen in time: by fixing its training data at a cutoff date, it does not have to deal with the inevitable march of history and incorporate new events into its dataset. It is also why ChatGPT makes factual errors that even small children can pick out, because they have a causal understanding of the issue and ChatGPT does not.
ChatGPT certainly has a useful function, but this will be limited to tasks like writing the outline of a press release that is then edited and filled in by a human. Beyond this, I think its best use is for entertainment, where nothing that the system says should be taken seriously.
ChatGPT creates illusion of sentience
ChatGPT has created a lot of buzz and discussion on the Internet, and once again the idea is being bandied about that this moves us closer to general AI or machine sentience.
I view ChatGPT as a clever innovation capable of creating the illusion of sentience. But it is an illusion that is easily shattered by even a small dose of reality. Hence, in the quest for general AI and machine sentience, ChatGPT achieves very little. I continue to think that ever-increasing model size and the consumption of ever more compute power are not the way to get machines to drive vehicles safely and easily.
This is why a valuation of $30 billion for OpenAI is simply absurd: its approach to AI is very unlikely to provide the breakthrough that would be necessary to get anywhere near that valuation. Instead, I think that this, combined with its apparent disdain for the profit motive, leaves its valuation in fundamental terms much closer to $0 than to $30 billion.
I continue to think that the harder problems of AI will be solved by combining traditional, rules-based software with deep learning systems, as the two complement each other quite well and are already beginning to show some results.
For ChatGPT and Tesla, it is becoming increasingly clear that the hubris is starting to wear very thin.