Let’s imagine what North Korea could do with artificial intelligence

The Mansudae Monument in Pyongyang, recently. Credit: Attila JANDI / Shutterstock, Inc

We talk about the pros and cons of artificial intelligence in the context of relatively open societies. But what does AI mean for a walled-off centralized dictatorship like North Korea?

That question was put to Yuval Noah Harari, an Israeli historian at the Hebrew University of Jerusalem, in this interview in the Korea Times. His most recent book, Homo Deus: A Brief History of Tomorrow, explores the impact of technology on human development from both historical and futuristic perspectives.

Harari envisions three ways in which AI could impact North Korea:

  • The regime could fall even further behind the world in IT advancement and eventually collapse
  • The regime could harness AI to solidify its Orwellian grip on power (think of what a centralized dictatorship could do with wearable biometric sensors, location trackers and big data powered by an AI algorithm)
  • The regime could actually leapfrog South Korea in adopting AI for applications like self-driving cars.

The last scenario is interesting because it anticipates human resistance to the rollout of self-driving cars:

Consider what might happen if South Korea tries to ban human drivers and switch to a completely self-driving transport system. South Korean citizens privately own millions of vehicles, and many might object to losing their freedom and their property. There will also be objections from taxi drivers, bus drivers, truck drivers and even traffic cops, who will all lose their jobs. There will be strikes and demonstrations. The initiative could also be forestalled by legal and even philosophical conundrums: if a self-driving car causes an accident, whom do you sue? Or suppose a self-driving car loses its brakes due to some malfunction and has to choose between driving forward and killing five innocent pedestrians, or swerving to the side and endangering its own passengers. What should the car do?

None of these, Harari says, is a problem in an underdeveloped centralized dictatorship, because there aren't many vehicles and worker strikes are illegal.

Harari also said the rise of AI could impact the relationship between North and South Korea in other ways, particularly in terms of cyberwarfare and potential reunification and integration:

AI is likely to transform the culture and even psychology of South Koreans, and if North Koreans do not undergo a similar revolution, the gap between the populations would become bigger than ever before. It is beginning to happen even today. Just think of the cultural gap between a South Korean teenager glued to her smartphone, YouTube, Instagram and Twitter and a North Korean teenager who might well be dumbfounded to see people walking down the street and constantly looking at small screens in their palms.

I recommend reading the whole interview. (Harari also talks about big data collection, Donald Trump and the rise of nationalism in an age of global problems, and the impact of technology on religion.)

It’s an interesting perspective from someone who isn’t a specialist in technology, and who sees technology as a kind of superpower that humankind has harnessed for its own purposes.

That may sound fantastical, but technology has always been a tool to help people accomplish what they could never do as mere humans. Easy examples include the airplane enabling us to go from Point A to Point B several thousand miles away in mere hours, or a smartphone enabling us to order and pay for a coffee before we arrive at the café.

Writer Warren Ellis, in his collection of talks Cunning Plans, equated the age of touchscreen technology with supernatural magic: just as a wizard in medieval lore would wave his finger at a magic mirror to make things happen, we do the same today with tablets and smartphones. Yes, it's not the same as manipulating the elements or firing thunderbolts from a cane or whatever. But metaphorically it's the same basic concept: technology enhances our natural abilities and gives us new ones.

Viewed in that context, AI is essentially an outboard brain that enables humans to process information faster than our real brains ever could.

To be sure, AI is wildly overhyped at the moment. But these capabilities are on the way. AI can already beat the best human players at games of calculation like chess and Go. Properly implemented, it can drive cars better than we can (or at least more safely). And it is becoming increasingly predictive, which basically means one day our devices and homes will come equipped with precognitive virtual agents that know what we want before we do.

Imagine the impact that will have on human development over the next 50 years. Or even the next 20.

And yet, as Harari points out, there will be resistance from people whose jobs and livelihoods will be lost to automation, or who simply fear that AI is Skynet waiting to happen. Imagine also the potential security threat vectors of a world in which humans are dependent on these tech-enhanced abilities.

That’s no reason to stop AI development, of course. But it’s always a good idea to keep one eye on the possible impacts of technological disruption, both positive and negative, on humankind. As Harari points out, adoption of disruptive and transformational technology doesn’t guarantee a single deterministic outcome. We need to be aware of all possible outcomes, including the ones involving paranoid isolationist dictatorships, and prepare for them.
