There is considerable disagreement over the dangers that AI presents to the human race, but I think it is the laws of physics that will prevent the dystopian predictions from coming true.
At the recent TechCrunch conference, Google’s head of AI was quick to dismiss Elon Musk’s concerns that AI could present an existential threat to humans or cause a third world war.
Artificial super-intelligence is the point at which machines become more intelligent than humans. To reach it, computer capability needs to keep growing at an exponential rate for the next 23 years, and three huge AI problems need to be solved. RFM has identified these problems as:
- The ability to train AIs using much less data than today
- The creation of an AI that can take what it has learned from one task and apply it to another
- The creation of an AI that can build its own models rather than relying on humans to do it
Progress against these three goals is incredibly slow, and only the very best companies are making any real progress at all. Everyone else claims to be working on AI but is in reality using advanced statistics to make predictions with an improved probability of being correct. Even with the best minds working on them, I think it will be decades before these problems are anywhere close to being solved.
However, the real reason why I think AI will not overtake the human race comes down to Moore’s Law.
If one extrapolates the exponential pace of computer capability over the last 40 years, one can predict that computer intelligence will overtake that of humans by 2040.
This is what most of the predictions of artificial super-intelligence are based on, and where much of the fear comes from.
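The extrapolation behind these predictions can be sketched with a few lines of arithmetic. The 1.5-year doubling period below comes from the "18 months" figure discussed later in this article, and the 23-year horizon matches the gap to 2040; the baseline transistor count is a purely illustrative assumption.

```python
# Sketch of a Moore's Law extrapolation, assuming a doubling of
# transistor counts every 1.5 years (the "18 months" figure).
# The 1e10 baseline is an illustrative assumption, not a real chip.
def transistors(years_from_now, baseline=1e10, doubling_period=1.5):
    """Projected transistor count after the given number of years."""
    return baseline * 2 ** (years_from_now / doubling_period)

# From now to 2040 is 23 years, i.e. roughly 15 doublings --
# a ~40,000x increase in transistor count if the trend held.
growth = transistors(23) / transistors(0)
print(f"{growth:,.0f}x")
```

The point of the sketch is only to show how steep the assumed curve is: the 2040 forecasts stand or fall with that doubling period, which is exactly the parameter the next section argues cannot be sustained.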
However, I do not think that the current breakneck pace of Moore’s Law can continue. Ten nanometers is currently the cutting-edge geometry for semiconductors, and below around 5nm quantum effects mean that transistors no longer behave reliably. This means that doubling the number of transistors in the same area of silicon every 18 months will no longer be possible with the silicon transistors we know today. It is this doubling that has underpinned the exponential improvement in computer capability over the last 40 years, and without it I think that improvement will slow to a crawl.
In order to continue beyond this point, a new form of transistor is required, a change that could prove as fundamental as the shift from triode vacuum tubes to silicon transistors. Alternatives to the silicon transistor are at such an early stage of development that it seems inevitable that Moore’s Law will grind to a halt long before a viable alternative is found.
I suspect that this will mean that the pace of improvement of computer capability will also slow down to the point where artificial super-intelligence drops way below the visible horizon.
Hence, while I think that Elon Musk is right that humans would be in trouble if machines ever became more intelligent than we are, that point is so far away in the future that Google is also right not to be worried about it.
Dr. Moore can be content that he has added saving the human race to his list of accolades.
This article was originally published at RadioFreeMobile