Already the dark side of AI is getting worryingly dark – who can control it?

The dark side of AI takes many forms. For a long time, the discussions centred on whether AI-powered robots or AI-driven computers would take over our jobs. At the time, we consoled ourselves with the idea that AI could do repetitive jobs really well, but that anything requiring some kind of emotional intelligence would be beyond its capabilities.

We also balanced the dark side of AI against the benefits. The cost savings, the efficiencies and the knock-on effects on climate change and, therefore, our planet were all discussed and verified with sage nods.

Then we thought about the dark side powering fraud and then, obviously, fraud prevention. We now assume that 5G and whatever comes next will rely on AI to act quickly enough to counter cyber threats; in fact, we rely on AI to predict where and when the next attack will come.

We thought that the dark side of AI was extending into orchestrating attacks on mission-critical infrastructure and assumed that those in power were panicking badly. They were, but it turns out that most of the IT infrastructure running, for instance, the Colonial Pipeline business was installed before the internet was a ‘thing.’ Why, those in authority thought, would we need modern, agile and reactive systems to run something as ‘passive’ as an oil delivery network?

And we thought telecoms was suffering from the weight of legacy systems!

The sheer volume and increasing sophistication of the dark side of fraud are overwhelming. We gave the attackers a huge advantage when we stuck our heads in the sand and hoped cyberattacks would go away or be someone else’s problem. This is still the case.

Cyberattackers are now the dominant players. And AI is part of their arsenal.

Worryingly, the dark side of AI goes further. AI is being used alongside facial recognition to identify people. And now, to kill them.

We sort of knew that police forces are using facial recognition to identify known bad guys. That is good. We also know that the dark side uses AI-powered drones, weapons, and facial recognition to target people and kill them. More worrying, according to the UN, is that ‘slaughterbots’ are not only killing targets but making the decision to kill without human intervention. A line has been crossed.

Yet, AI is for good, too (the ITU runs a huge ‘AI for Good’ initiative). Drones with thermal imaging cameras are saving animals in the wake of forest fires. Vaccines against coronavirus would not have been developed as quickly without AI. Brain interfaces that enable disabled people to play music or walk again must be good.

As with any other technology before it, the problem with AI is that it is neutral, inert. And it is humans who take it to the dark side or into the light.

The question is, now that AI machines are being used to make better AI machines, what will AI conclude? Is the future some Star Wars or Terminator story where machines themselves take their artificial intelligence to the dark side? Or into the light?
