Here’s why AI is great at some things and rubbish at others

Gary Marcus, founding CEO of Geometric Intelligence and former head of Uber AI Labs. Image credit: EmTech Hong Kong

Despite all the hype over artificial intelligence as the driving force behind all kinds of digital technologies – from virtual reality to self-driving cars – the truth is that current AI is very limited in its usefulness, and we’ve got a long way to go before it can live up to that hype.

That was the overall theme of an afternoon session at EmTech Hong Kong 2017 this week, in which several speakers noted that while AI is good at some tasks, it’s terrible at others.

There are various reasons for this, and many of them come down to the simple fact that AI doesn’t “think” the way humans think. It may perceive the world around it via sensors – and most AI is designed with perception in mind – but perception alone isn’t the same thing as intelligence, which also requires common sense, planning, analogy and reasoning, among other capabilities.

Optimal conditions

Gary Marcus, founding CEO of Geometric Intelligence and former head of Uber AI Labs, said that AI is currently very good for certain applications such as advertisement targeting, prioritizing search queries, transcribing speech and automating surveillance. But for other tasks, it only works well in optimal – which is to say unrealistic – conditions.

“It’s good at speech recognition … in quiet rooms with native speakers,” Marcus said. “It’s good at image recognition … in bounded worlds with a limited number of objects. And it can do natural language understanding … in narrowly bounded domains.”

In other words, current AI mainly works in specific, limited and predictable scenarios, the most prominent example being Google DeepMind’s AlphaGo, which recently made headlines for defeating the world’s top human Go player, Ke Jie.

“The reason AlphaGo could do that is because Go is determined by set rules that don’t change,” Marcus said. “If you change the rules of the game, then AlphaGo can’t win.”

Marcus said that even AI researchers who acknowledge the limitations of AI and machine learning are overly optimistic, pointing to a recent article in Harvard Business Review in which Andrew Ng offered this rule of thumb for AI’s current capabilities:

If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future.

Marcus amended that rule as follows: “If a typical person can do a mental task with less than one second of thought and if we can gather an enormous amount of directly relevant data, we have a fighting chance – so long as the test data isn’t terribly different from the training data, and the domain doesn’t change too much over time.”
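Marcus’s caveats are easy to demonstrate. The sketch below (illustrative only, not from the talk) trains a simple classifier on one distribution of data: it scores well when the test data resembles the training data, then falls to roughly chance once the domain drifts.

```python
# Illustrative sketch (not from the talk): a model that looks accurate on
# test data resembling its training data degrades once the domain drifts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    # Two Gaussian classes in 2-D; `shift` moves both clusters to
    # simulate the domain changing over time.
    x0 = rng.normal(loc=-1.0 + shift, scale=1.0, size=(n, 2))  # class 0
    x1 = rng.normal(loc=+1.0 + shift, scale=1.0, size=(n, 2))  # class 1
    return np.vstack([x0, x1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(1000)             # training distribution
model = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(1000)               # test data like training data
X_drift, y_drift = make_data(1000, shift=2.0)  # the domain has changed

print("accuracy on matching test data:", model.score(X_same, y_same))    # high
print("accuracy on drifted test data: ", model.score(X_drift, y_drift))  # near chance
```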

Because of those limitations, Marcus said, many of the things we want to do with AI still aren’t possible: conversational interfaces, automated scientific discovery, automated medical diagnosis, automated scene comprehension for blind people, domestic robots, and safe, reliable driverless cars.

Marcus insisted that AI isn’t robust enough yet to handle such tasks. For example, an AI robot may be able to pick up an elderly patient and place him or her in a bed 95% of the time – but it’s the 5% failure rate that makes it unusable for such a task.

Statistics isn’t knowledge

The reason AI isn’t ready for prime time yet is simple enough: it’s really difficult.

“Engineering machine learning is hard, because it’s difficult to debug, revise incrementally or verify,” Marcus said. “We have no procedures for reliably building complex cognitive systems.”

Noriko Arai, professor at the Information and Society Research Division of Japan’s National Institute of Informatics. Image credit: EmTech Hong Kong

The other challenge is that AI essentially works by processing big data as statistics – and statistics is not the same thing as knowledge. “If you chop an iPhone in two with an axe, it won’t work anymore, but you don’t have to try that at home to guess the outcome,” Marcus said.

Noriko Arai, professor at the Information and Society Research Division of Japan’s National Institute of Informatics, made a similar point as she described a project that involved developing an AI-powered robot to see if it could pass the entrance exam for the University of Tokyo, known as Todai.

Part of the exam included writing a 600-word essay on 17th century maritime trade. While the robot wrote a better essay than most students, she said, the point is that it didn’t understand what it was writing.

“It’s all statistical,” she said. “No AI is good at reading. It can search for keywords and produce an answer that is statistically correct. It only appears to understand.”
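As a rough illustration of what “statistically correct” answering looks like – a toy sketch, not the Todai robot’s actual method – consider a system that answers a question by returning whichever stored sentence shares the most words with it:

```python
# Toy sketch (illustrative; not the Todai robot's actual method):
# "answering" by keyword overlap alone.
import re

passages = [
    "In the 17th century, Dutch ships dominated maritime trade in Asia.",
    "Silver mined in Japan was exchanged for silk and porcelain.",
    "The shogunate later restricted foreign trade to the port of Nagasaki.",
]

def tokens(text):
    # Lowercase word set -- the only "representation" this system has.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def answer(question):
    # Return whichever passage shares the most words with the question.
    q = tokens(question)
    return max(passages, key=lambda p: len(q & tokens(p)))

print(answer("Which ships dominated maritime trade in the 17th century?"))
```

The printed sentence looks like a correct answer, but the program has no grasp of ships, trade or centuries – only word overlap.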

She added that the robot didn’t pass the Todai exam, although it did place in the top 20% of applicants – a score that would have been good enough to pass the entrance exams at 70% of all other universities in Japan. But, again, it achieved that without actually knowing anything.

Arai calls this “artificial unintelligence”.

A little bit of emotion

Another way that AI falls short of the benchmark of human intelligence is empathy, said Phil Chen, executive chairman of Soul Machines, a company that is working on AI-powered interactive avatars. (Imagine Siri or Alexa as a conversational computer-animated character, and you get the idea.)

Phil Chen, executive chairman of Soul Machines. Image credit: EmTech Hong Kong

“A key element lacking from AI is the ability to understand emotional intelligence,” he said, citing a centuries-old debate over whether human intelligence is defined as much by emotions and relationships as by cognitive reasoning.

Put another way, AI needs to be much better at understanding not only the relationships between different things, but also how it relates to humans and vice versa.

“It needs the ability to tell a joke or read people’s emotions,” Chen said. “It needs to be able to detect tone of voice, breathing patterns, cadence of speech, and so on.”
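To make that concrete, the first step in reading a voice is usually extracting low-level features such as pitch and loudness over time. The sketch below (illustrative only, not Soul Machines’ pipeline) does this for a synthetic tone using the librosa audio library; a real system would feed such features into a trained emotion classifier.

```python
# Hedged sketch: extracting the raw vocal features -- pitch and loudness --
# that an emotion-recognition model might consume. Uses a synthetic tone
# so the example is self-contained.
import numpy as np
import librosa

sr = 22050
t = np.linspace(0, 2.0, int(sr * 2.0), endpoint=False)
# Synthetic "voice": a tone whose pitch and volume rise over two seconds.
y = np.sin(2 * np.pi * (200 + 50 * t) * t) * (0.3 + 0.3 * t / 2.0)

f0 = librosa.yin(y, fmin=80, fmax=600, sr=sr)  # pitch contour (Hz) per frame
rms = librosa.feature.rms(y=y)[0]              # loudness contour per frame

print(f"pitch ranges {f0.min():.0f}-{f0.max():.0f} Hz across {len(f0)} frames")
print(f"loudness rises from {rms[0]:.3f} to {rms[-1]:.3f}")
```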

Of course, none of this means that AI can’t achieve any of these things – it just can’t right now. The question is how long it will take for AI to reach its full potential and deliver advanced applications like self-driving cars, domestic robots and automated cancer diagnosis.

The timelines aren’t clear, but Marcus said that it’s going to take interdisciplinary collaboration on a global scale. “It can’t be just Google working on this.”
