The way to train bots properly is to become really paranoid


It is becoming hard to decide whether humans or bots create more frustration, frayed nerves and bad tempers. Right now, as we know, we are at the height of AI-driven bot hype. Press releases pour through our digital door saying things like “AI is being implemented by 25% of customer experience operations”.

Our response is generally, “Please don’t.”

On the one hand, MIT says that more fake news (more frayed nerves and bad tempers) is spread by humans than by bots. That’s because humans will seize on and pass around things that go against our view of the world, whereas a bot will simply check the piece of fake news against a database of undesirable terms and images. Nothing more.

On the other hand, bots can cause almost visceral, jump-up-and-down, pull-your-hair-out hatred in customers. A case in point is Telstra’s new customer service bot, Codi, which has been tagged the “virtual moron idiot” – among other things – by less-than-happy punters. As well as the usual examples of the bot not knowing your name, there are instances of it asking you to choose “from the options below” and then not producing any. The Sydney Morning Herald picked up the story, with this less-than-complimentary introduction:

Telstra customers have blasted the launch of a virtual assistant support ‘chatbot’ that fails to answer basic questions, can’t differentiate between names and countries, and is often unable to transfer users to a human agent without multiple attempts.

In some instances, bots can actually hamstring a business. Many of you enjoy our informative, entertaining daily mailers, which keep you up to date with our news and views. Several days ago our mailer did not go out. We investigated and were told that the automated email platform’s content police bot had suspended our account because we had contravened its terms of use.

We asked what we had done, and were given a trouble ticket number. Nothing else.

The only thing we could think of was one of our headlines, which read “Broadcom promises not to threaten national security”. Had the content police bot – named, we kid you not, ‘Omnivore’ – looked up ‘national security’ in its database and decided the headline contravened its terms of use?
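If that guess is right, the ‘intelligence’ involved is little more than blunt keyword matching. Below is a minimal sketch of that sort of filter – the blocklist, function name and matching logic are entirely our own invention for illustration; we have no idea how Omnivore actually works.

```python
# A toy keyword filter -- our own invention, not Omnivore's actual code.
BLOCKLIST = {"national security", "weapons", "embargo"}

def flagged_terms(text):
    """Return every blocklisted phrase found in the text, with no regard for context."""
    lowered = text.lower()
    return [term for term in BLOCKLIST if term in lowered]

headline = "Broadcom promises not to threaten national security"
print(flagged_terms(headline))  # ['national security'] -- the 'promises not to threaten' part counts for nothing
```

A human reading that headline sees a reassurance; a filter like this sees only the phrase it has been told to fear.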

Eventually, the daily mailer – one of our main channels of communication – was restored to us. We asked why it had been taken from us in the first place. We were given another trouble ticket.

We obviously can’t tell you the name of the email platform we use – that would be wrong. Suffice it to say that if we ever meet that moronic monkey face to face, it will quickly learn how disruptive we can be.

No fool like a bot fool

Another problem with the bot/human issue is that humans can not only deceive bots (or at least send their decision trees haywire) but, in doing so, convince other humans that research, for instance, is true when it is not.

The example of Microsoft trialing its bot Tay, and users ‘training’ the bot to spout racist and extremist propaganda, is old but relevant. (And here are the worst of the rants that were captured before Tay was taken offline.)

But we have, it seems, gone a step further. A software company (a rather good subscription-based software company) was checking how its competitors were faring on one of those TripAdvisor-type review sites for business. It was extremely interested to see that some of those competitors were scoring very well, with actual customer testimonials to back it all up. Smelling a rat, our software company dug a little deeper and found that many of the ‘customers’ providing the glowing reports were verifying their expertise in the area with LinkedIn profiles. Further investigation revealed that these people had no connections and very little history or experience to boast of. It is reminiscent of the social media companies and users who used to employ people to sit and ‘like’ their posts for days on end. The review platform itself said the profiles were nothing to do with it.

The point of the last story is that we (humans) tend to believe research or reviews if they are vetted by bots and appear on ‘trusted’ platforms.

The chaos curve

The thing is that while these current examples are annoying, they are also harmless – harmless in the ‘life and death’ sense, that is, not in the ‘the mailer didn’t go out’ sense. But even venerable organisations such as Wired are worried about the trajectory of a chaos curve that begins with the relatively benign tweaking of fake news and the incitement of racist rants that clearly amuse a tiny minority of people.

In a recent daily mailer (we are betting they don’t monkey around with off-the-shelf email platforms), Wired says:

As tech companies rush to power their products with machine learning, they must be able to protect against these sorts of attacks [fooling AI into believing things which aren’t true], which take advantage of AI’s tendency to “hallucinate”.

…if we want the future to feature safe self-driving cars, hyper-capable voice assistants, and a successfully moderated internet, it’s crucial that those systems be able to withstand sensory tomfoolery. That may require machine learning researchers to get a little more paranoid.
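The ‘sensory tomfoolery’ Wired describes is usually demonstrated with adversarial examples: tiny, deliberate perturbations that flip a model’s decision while looking like nothing at all to a human. Here is a minimal sketch of the idea against a toy linear classifier – the model, numbers and threshold are all invented for illustration, and real attacks target far more complex networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "content classifier": score = w.x + b; a positive score means "looks fine".
w = rng.normal(size=20)
b = 0.0

# An input the classifier confidently accepts (score is exactly +1.0 by construction).
x = w / (w @ w)
print("original score:  %+.2f" % (w @ x + b))

# Fast-gradient-sign-style perturbation: for a linear model the gradient of the score
# with respect to the input is just w, so nudging every feature a tiny amount against
# sign(w) drags the score down while barely changing the input itself.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print("perturbed score: %+.2f" % (w @ x_adv + b))             # pushed below zero: now "blocked"
print("largest change to any one feature: %.2f" % np.max(np.abs(x_adv - x)))
```

The unsettling part, and Wired’s point, is that the same trick scales up: perturb the pixels of a road sign instead of a toy vector and the ‘hallucinating’ model belongs to a self-driving car.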

The issue seems to have become a black-and-white one, like many before it. In the days when desktop publishing was born it was a case of “let’s do the whole publication on a Mac” or “the technology is not ready, let’s do nothing on the Mac”. The sensible thing was to do some of it on the Mac – the bits a Mac (other computers are available) does best.

If we humans can train bots, then maybe the question becomes: why can’t we (the customer/user/producer) train bots ourselves? Trying to train bots to answer personalized questions when you have a customer base of millions is insane and likely to end in failure and ridicule. Ah, yes, it already has.

The adage that bots are good at computing and humans are good at emotion – and not vice versa – is right on the money.

If the age of the app is coming to an end, as many now believe, and we are moving to a world where apps sit in the background doing their one thing well, we will begin to see something else between the customer and those apps – an intelligence that you can train and teach to provide you with what you want to know and do, when you want to know and do it. That intelligence draws on all those apps we used to visit one at a time, combined with our experience, preferences and prejudices.

To be fair, this is beginning to happen, very slowly, and there are initiatives that are trying to work out how to combine the computing prowess of bots with the emotional prowess of humans.

The answer seems to be that we, the individual human, will provide the best way of taking AI and the bots they drive to the next level, where the computing capabilities of bots and emotional capabilities of humans can actually work together. To achieve intelligent personalization, it seems, we must do it personally.

After all, if you want a thing done properly, you have to do it yourself.
