Is the ITU’s AI for Good too good to be true? Maybe not!

AI for Good
Meet Sophia, the sort of scary humanoid female torso robot from Hanson Robotics with an AI brain.


I was invited to Geneva last week to attend an unusual ITU event. You remember the ITU – it was the center of the telecoms universe before GSM and mobiles took over the world. Apart from determining the telecoms standards we have all adhered to for eons, it also held the biggest industry event every four years in Geneva, now dwarfed by the Mobile World Congress.

You would be forgiven for thinking it had lost its mojo, especially if you had attended any of its Telecom World events in recent years that have been dominated by developing economies promoting themselves via extensive exhibition stands.

However, the AI for Good Global Summit was something altogether different. It’s as if the ITU had had a heart (and brain) transplant by taking on something so ‘non-telco’ and as disruptive and divisive as artificial intelligence.

It was touted as an all-inclusive global dialogue aiming to chart a course for AI that would benefit all of humanity – a lofty goal indeed – and it did attract many of the world’s brightest minds. Humanitarian activists met with industry leaders and academics to discuss how AI could assist global efforts to address poverty, hunger, education, healthcare and the protection of our environment.

In parallel, the event aimed to explore means to ensure the safe, ethical development of AI, whilst protecting against any unintended consequences of advances in AI. The event was co-organized by the ITU and the XPRIZE Foundation, in partnership with 20 other United Nations agencies, and with the participation of more than 70 leading companies and academic and research institutes.

Plenary sessions on the first day of the event gave voice to those leading minds in AI, framing the discussions by offering expert insight into the state of play in AI, the potential of AI to benefit humanity and the best course of action to help ensure that AI fulfils this potential.

Breakthrough sessions followed, inviting participants to collaborate in proposing strategies for the development of AI applications and systems able to promote sustainable living, reduce poverty and deliver citizen-centric public services.

AI makes robots sort of terrifying

For most of us, the mere mention of AI brings on images of connected cars, robots that can think, the personal assistant embedded in our mobile phones and computers, military applications and even something to improve customer experience (as if). And, of course, some of that showy stuff did sneak in but mainly to emphasise a point or show the audience just how far AI has come.

David Hanson from Hanson Robotics took the showmanship award with Sophia, a human-like robotic female torso that could not only hold a sensible conversation with him, but also had all the facial characteristics and expressions of a real person, including smiling, frowning, nodding, emphatic eyebrows and blinking eyes.

It was … well, scary. Really scary. If the idea is for us to feel more comfortable speaking to a humanoid rather than a robot that looks like a robot, it probably didn’t work on me, but as a replacement for human teachers or as a customer service agent, there might be a case. How “she” could help the world’s impoverished people, I don’t know.

The breakthrough (or breakout) sessions were supposed to give attendees the opportunity to chip in with their own thoughts to help formulate a set of actions post-event. I attended one on ethics but came away disappointed that the session was dominated by the six or seven instigators on stage who were supposed to trigger discussion, not dominate it. I am told other sessions were far more interactive.

Their objective was to set the framework for follow-up sessions where guidelines and policies would be developed. This is all admirable stuff for humanity, but AI is being largely driven by business and by members of academia before they get swallowed up by the corporate vampires.

Don’t be evil

The ITU might well be the only body that can carry out such a bold plan, and if this well-executed event is any indication it might just get away with it – with a little help from its friends. Whilst the focus this time was on AI for the good of people, we may need to pay equal attention to the possibility of evil coming from it.

As one speaker noted, emulating the human brain might not be such a good idea. Just look at history to see how many mistakes we have made and continue to make. Similarly, could we trust a machine to make a rational decision using AI every time? What if a speeding connected car is confronted by an accident scene where the only two options are to run into an overturned truck and kill all those on board, or veer onto the only possible escape route where a crowd of people is standing? It would be tough enough for humans to make that split-second decision, no?

As Apple CEO Tim Cook said: “I’m not worried about artificial intelligence giving computers the ability to think like humans. I’m more concerned about people thinking like computers, without values or compassion, without concern for consequence.”

And please, let’s avoid getting into the other popular argument that AI will help machines determine good from bad. Could they be any worse at this than humans? I doubt it. Let’s hope this ITU initiative gets the support it needs and moves quickly without getting bogged down in burdensome bureaucracy.

View some highlights of the event below, including Sophia:
