Trust in AI has almost become a bigger topic than AI itself. We know what AI is (almost), to the point where we do not have to call it artificial intelligence (AI) anymore. The real question, though, is what consumers really think about it and its impact on society.
The question “Do we trust in AI?” is being replaced by: “Do we trust AI in the hands of those who control it?”
The answer, according to a survey by global research house YouGov covering 19,000 consumers across 17 markets, is ‘yes.’
Incredibly, for such a large sample, those who believe the impact of AI will be ‘more positive than negative’ exactly match those who say it will be ‘more negative than positive,’ at 23% each; 40% believe the impact will be ‘equally positive and negative.’
When it comes to geography and trust in AI, Eastern markets are far more positive than Western markets and the country that is the most enthusiastic about its potential is Indonesia. Consumers in India believe they are the best informed.
A few things stand out in the report, but perhaps the most surprising is that trust in AI is not necessarily age dependent. Most technology breakthroughs find favour with the young and scepticism from the old, yet here there is no clear relationship between age and acceptance. In China, acceptance actually increases among older generations.
How nervous are we about AI?
The majority of people are nervous, and the survey grouped responses into three categories – ‘fear of losing control,’ ‘inevitable evolution’ and ‘advancing society.’
In the ‘fear of losing control’ category, 42% of people believe that ‘we must pay attention’ so that AI ‘doesn’t get out of hand.’ The same percentage are worried about its potential if it gets into the ‘wrong hands.’ A further 28% believe it will have negative effects we cannot predict, and 23% that it is likely to get beyond human control.
The middle, ‘inevitable evolution’ category (highest score 31%, against 42% in ‘losing control’) covered answers such as AI being the ‘next step in evolution,’ that it will ‘replace some human tasks’ and that it will ‘transform some sectors.’
The positive, ‘advancing society’ answers (where the highest score was half that of the ‘losing control’ category) had an equal percentage (21%) believing that AI will transform society for the better in ways we cannot foresee and that it will ‘advance society.’
Who should develop AI?
Respondents in every market except Singapore, India and the UAE believe that government should not develop AI. Instead, consumers believe that Big Tech should develop it, although only 32% hold that view – indicating, perhaps, that few people really know who should develop what is possibly the most transformative technology we have ever created. This is supported by the fact that only 10% of respondents feel well informed on the subject, and trust in AI increases with education.
People’s trust in AI lies on a knife edge between nervousness and confidence. AI will probably end up vastly improving some parts of our world and in ways we cannot yet predict. Already, surprising stories are emerging about how therapists are using AI to understand the most effective approaches to different clients and some believe that AI might help uncover exactly how psychology and psychiatry actually work.
Stories will continue to emerge about how AI is taking us to places we thought impossible – in medicine, in industry and in many other parts of our fast-developing world.
The problem when it comes to putting our trust in AI is that it is a technology and essentially dumb, in that it does what we ask it to do (for now). That is both the problem and the opportunity.
Opinions about AI and its impact are finely balanced, as our world is at the moment. The next year or so will tell us whether the impact of AI will be positive, negative or both.
The report can be downloaded, free, here.