This year’s Mobile World Congress wasn’t all about 5G hype – there was plenty of hype around artificial intelligence as well. At least in the press releases. On the exhibition floor and in the conference sessions, there was (mostly) more grounded conversation about what AI can and can’t do, the challenges of implementation, potential apps and business cases, or sometimes just trying to figure out what the hell AI is.
I sat in on some AI-themed keynotes and panel sessions on Wednesday. Here are six quick takeaways from the wild world of practical AI.
1. AI products aren’t that smart yet
Telefónica has been harnessing AI not only for its Aura platform but also internally for improving operations and processes. The challenge with the latter, said Angela Shen-Hsieh, Telefónica’s director of predicting human behavior, is that it’s a “roll your own” process because there aren’t many AI solutions that are smart enough yet to do what Telefónica needs them to do.
“We have to learn how to do it ourselves, which limits how much we can scale,” she said. “We’re waiting for the next wave of technology to address specific problems.”
2. For internal usage, AI needs to be cultural, not a department
Gabriela Styf Sjöman, VP of group networks at Telia, said that the decisions about how to prioritize where artificial intelligence is applied to internal processes are distributed rather than centralized – and that’s a good thing.
“You don’t want one AI team directing it – it has to be a cultural thing where everyone accepts it,” she said. “Otherwise you have engineers who always ask, ‘Are you sure this algorithm is really making better decisions than my manual work?’”
3. AI needs the right data, not all the data
When big data first became a buzzword a few years ago, we heard a lot about how the more data you threw into the analytics engine, the more insights you could pluck from it, including insights you might not have been looking for in the first place.
Turns out that doesn’t work for artificial intelligence designed for specific apps (as opposed to generalized AI that is designed to replicate human thinking).
“You need to choose the right data for AI systems to realize the benefits,” said Georg Polzer, chairman and co-founder of Teralytics. “AI is built by the data we feed it.”
Picking the right data also means having a clear understanding of what it is you want your AI to do. Dr. Min Wanli, chief machine intelligence scientist at Alibaba Cloud, said it’s essential to build AI around the business case, not the other way around.
“Our approach is business first, technology second,” he said. “First you need to define the problem you want to solve. Then you give the appropriate data to the tech people.”
4. GDPR will be good for AI
It may be just as well that AI doesn’t need massive amounts of data, since the European Union’s GDPR law comes into effect in May, which could make it more difficult for companies to hoover up lots of data to feed into AI-powered apps anyway – at least without the consent of consumers. But members of a panel discussion on AI and digital transformation agreed that GDPR won’t negatively impact AI – quite the opposite, in fact.
“I think it’s good because it forces us to use data in a good way,” said Sjöman of Telia. “It governs how we use it.”
“It challenges us to provide real value in exchange for the consent of the customer to use their data,” added Shen-Hsieh of Telefónica.
5. Digital assistants need language expertise
The digital assistant craze has been pioneered by the likes of Amazon, Apple and Google, but telcos are also looking at getting into the game. However, it’s harder than it looks, and it’s not simply a matter of how smart the underlying AI technology is – you also need a fundamental understanding of how language works.
Oren Jacob, founder and CEO of PullString, explained in a keynote that voice-activated digital assistants like Alexa or Siri are designed to replicate a human conversation. One facet of conversation is that you can’t separate it from the person you’re having it with – which is why a digital assistant has to be a full-fledged character in order to hold up its end of the conversation.
“Your voice app should have a character, a backstory, motivation, tone, mood and style,” Jacob said.
You also have to think about how the assistant reacts under pressure (e.g. when it can’t answer a question, or says the wrong thing because it hasn’t realized you’ve switched topics). He also said it’s important to develop a digital assistant’s personality to engage and retain the audience, but don’t push it too far or you may polarize or alienate them.
6. AI robots are safe, it’s humans you need to worry about
During a keynote on the topic of AI-powered robots, MOV.AI founder and CEO Limor Schweitzer said that when it comes to making sure robots perform their tasks safely, it’s really just a matter of getting the engineering right.
“Just about anything that can be thought of in terms of safety can be engineered into a robot to prevent it, or it can be engineered into the environment in which the robot operates,” Schweitzer said.
The challenge, he added, is that you can’t predict what humans will do in that environment. “If humans abuse the robot in a way we haven’t predicted, the robot might retaliate.”