AI isn’t ready to tell you what is fake news, so it’s up to lazy humans, sorry


Fake news is big news at the moment. Presumably if D. Trump Esq had lost the US election, he would have made it bigger. Whether or not Facebook’s fake news issue swung or influenced the result, the whole fake news fiasco brings a dose of real life to another very current issue.

Artificial intelligence. Machine learning.

The idea that machines can make those “fake news” judgements now needs to be discussed.

Mark Zuckerberg is probably sick to death of news, fake or real. The real news is reporting that he is dealing in fake news and influencing people’s opinions (as if every newspaper in history hasn’t tried to do that on a daily basis). And he says the problem is much more complex than it seems. In fact, he said, “Facebook has been working on the issue of misinformation for a long time,” calling the problem complex both technically and philosophically.

Technically, the challenge is enormous because there is no black or white, no binary decision-making involved. And, clearly, computers like those black and white problems.

Philosophically, human beings need to judge whether something is fake or just twisted in such a way as to make you believe something. (We also need to have the whole “freedom of speech/censorship” debate as well, but let’s not.)

Fake news is propaganda, then – so, not new. The only real difference is that, with social media, warped individuals can launch propaganda campaigns, not just warped regimes.

“History is simply a lie, agreed upon,” said the brilliant French writer Voltaire. Or was it Napoleon? [It was Napoleon. – Ed.] History, it is fair to say, “is the dirty dishwater of propaganda, created by the winner.” Or is it?

Or, as Abraham Lincoln famously said, “You should never believe anything you read on the internet.”

You see the problem.

It is very possible, probable even, that one day machines will have as much emotional intelligence as humans, and therefore be as equipped to make the kinds of decisions that Zuckerberg is in trouble for making or not making.

But we are not there yet.

As Marcus Weldon, President of Bell Labs, said at the Great Telco Debate, “At the moment, machine learning is at the point where, based on a conversation with a customer, say, it can only get to a point of concluding the answer might be X, not is actually X.” We have a way to go.
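Weldon’s point – that today’s machine learning can only conclude the answer *might* be X – boils down to classifiers emitting confidence scores rather than verdicts. A toy sketch of that idea (the phrase list, scoring, and threshold here are entirely hypothetical, not how Facebook or anyone else actually does it):

```python
# Toy illustration only: real systems use trained models, not keyword lists.
SUSPECT_PHRASES = ["you won't believe", "doctors hate", "shocking truth"]

def fakeness_score(headline: str) -> float:
    """Return a rough confidence score in [0, 1] that a headline is fake."""
    text = headline.lower()
    hits = sum(phrase in text for phrase in SUSPECT_PHRASES)
    return min(1.0, hits / len(SUSPECT_PHRASES) + 0.1 * hits)

def verdict(headline: str, threshold: float = 0.5) -> str:
    # The honest output is hedged: "might be", with a score attached --
    # never a flat "is fake" / "is real".
    score = fakeness_score(headline)
    if score >= threshold:
        return f"might be fake (score {score:.2f})"
    return f"probably fine (score {score:.2f})"
```

Even a real model only moves the score around; the binary call – and the responsibility for it – still lands on a human.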

And, some would say, it is simply too dangerous to take a software approach to making these judgements. We cannot risk letting the gap between now and then become a period in which the world fills up with fake news.

Or can we?

Should it not be the case that humans take the time to weigh up for themselves whether news is real or fake? And take responsibility for their own views on it, and for what they do with those views? Americans, according to a Pew survey, want news, and just the news – so maybe there is hope.

Whatever the rights and wrongs of the fake news issue, one thing is for sure. Technology platforms like Facebook tend to take away our ability to make sensible judgements. Essentially, they make us lazy. And therefore technology should not be used to make philosophical judgements.

Yet.
