Information warfare is our future, according to Megatron (great idea: call an AI system something that rings Doomsday bells) during an Oxford Union debate on where AI is leading us.
It was the first high-profile demonstration of an AI taking part in a debate about its own future.
Oddly (or perhaps not), Megatron proved quite capable of arguing both for and against the uses of AI.
It argued that AI can never be ethical: it is a tool and therefore (as we have said) not capable of being ethical itself. It added that we are not ‘smart enough’ to make AI moral.
It also argued that the most effective AI will be embedded in our brains (Megatron swallowed Wikipedia as part of its training, including a great deal about Elon Musk).
While those arguments are not that surprising or alarming, its take on the future of information was bleak.
Megatron predicts a future of information warfare.
Information will be more important than goods and services and will define the economies of the rest of the century, according to the AI speaker at the debate. And we will know so much about what people are doing, where they are doing it, and what they want and need that this information will be used in ways we cannot even imagine.
Information warfare is upon us.
So, do we stop now? We possibly still could, but is there an entity out there capable of halting the momentum of AI development?
And too many people believe that our future lies in the glow of enlightened guidance from AI.
There is, of course, the problem of Pandora’s Box. It has been opened: people have unleashed AI that can build its own next version.
With mixed results.
Humans solve problems based on experience and instinct. AI only knows what it gleans from the information it is trained on.
It also does entirely logical things that we would never do, because they are not part of our experience. One experiment was based on a race, with rewards along the way. The AI never completed the race: when it reached the first reward, it circled back to pick up the reward again. And again. And again.
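The failure mode described above is what researchers call reward hacking: the agent maximizes the reward signal rather than the task we meant it to learn. The toy simulation below (a hypothetical reconstruction, not the actual experiment) shows a reward-greedy agent on a one-dimensional race track with a respawning reward near the start; it loops on that reward forever and never reaches the finish line.

```python
# Hypothetical sketch of reward hacking: the track runs from 0 to
# TRACK_LEN, the finish line pays a one-time bonus, but a token at
# TOKEN_POS respawns every time it is collected. An agent that always
# heads for the nearest payoff circles the token and never finishes.

TRACK_LEN = 10   # finish line position (one-time bonus)
TOKEN_POS = 2    # respawning reward position


def nearest_reward_target(pos):
    # The agent only sees where rewards are, not which ones respawn.
    return min((TOKEN_POS, TRACK_LEN), key=lambda t: abs(t - pos))


def step(pos):
    target = nearest_reward_target(pos)
    if pos == target:
        # Reward collected; the token respawns, so the agent steps off
        # the square simply in order to collect it again next turn.
        return pos - 1 if pos > 0 else pos + 1
    return pos + (1 if target > pos else -1)


def simulate(steps=100):
    pos, collected, finished = 0, 0, False
    for _ in range(steps):
        pos = step(pos)
        if pos == TOKEN_POS:
            collected += 1
        if pos == TRACK_LEN:
            finished = True
    return collected, finished
```

Running `simulate(100)` shows the agent collecting the respawning token dozens of times while never reaching the finish line: perfectly logical behaviour given the reward signal, and exactly not what the designers wanted.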
AI is here to stay, and it is up to us (well, the scientists and tech companies) to work out whether and how we try to limit its potential.
Instead of a future in which AI guides us to enlightenment, we are equally likely to find ourselves staring down the barrel of information warfare, which will be as damaging as the ordinary kind.
The only way to stop an AI arms race is to stop using AI. Which is not going to happen.