If Facebook can target me, why can’t it target evil?

Image credit: igorstevanovic | Shutterstock.com

If Facebook and others can target us with advertisements, seemingly with pinpoint accuracy, in real time, then why can’t they do the same to stop vile, evil, live-streamed content such as the Christchurch massacre?

Greg Wells, writing for The Sydney Morning Herald, raised the issue when, soon after visiting the home of his daughter’s friend, whose family had just acquired an outdoor playset, he was hammered with ads for exactly the same playset on Instagram and Facebook.

Was it an amazing coincidence? Or was his location being tracked, linked to the friend’s location and correlated with his own Facebook posts showing he had a young daughter, or with his Facebook friends, perhaps even by turning on his phone’s microphone and recording his conversations?

He simply could not believe this was possible, but what if his worst fears were actually realised? We know it’s possible because we read news of privacy breaches constantly, but could the Facebooks of this world really be that clever, or devious, depending on which way you look at it?

The question of who is responsible for taking down inappropriate content is just as difficult to answer. A recent documentary aired on UK television showed teams of content ‘censors’ working in the Philippines whose sole task was to watch for content deemed inappropriate. Despite being given guidelines, each person featured had slightly different ideas on where to ‘draw the line’, depending on their own personal beliefs and upbringing. It was apparent that those with strong family or religious backgrounds had much less tolerance.

What was more concerning was the effect their jobs were having on them personally. The stress of viewing content that was abhorrent to them, and the fear of making a wrong decision and suffering rebuke from their seniors, were obviously taking their toll. It also highlighted the fact that there is no such thing as a uniform set of rules when people have to make decisions like this. That, and the time it takes for content to be reviewed and removed, make the whole process questionable.

So why can’t the social media giants, who are spending millions on artificial intelligence development to perfect targeted advertising, apply it to determine what content is inappropriate and block it in real time, just as they do with ads?

If you are even a little cynical, as Wells was, you would deduce that if there were profit to be made they would be all over it. It seems that regulation of some sort might be the only ‘incentive.’ The program showed how social media giants, when faced with national regulation that threatened to block, or had blocked, them over content governments deemed inappropriate, were able to ‘filter’ the content for that country. One assumes they were able to apply some algorithms that allowed them to keep the doors, and revenue streams, open.

Critics are quick to point out that regulation could lead to something hardly ever imposed in Western societies in this liberal era: censorship. Type ‘social media censorship’ into Google and you will be stunned to see the arguments both for and against it. Some camps lobby for a form of ‘filtering’ that they claim is not censorship; others are adamant that social media should be left alone to self-regulate. Still others claim that if the public don’t like the content they will stop frequenting the sites, and their demise will be a form of market attrition.

Of course, no answer will be the right one, but what, if anything, can be done to stop the live streaming of massacres, beheadings, torture and even suicides, all of which have been broadcast in recent years? If they cannot be stopped, should dedicated channels or private groups, requiring viewers to register, be established for this kind of content?

It’s not impossible; after all, online banks have implemented some impressive fintech to securely recognise customers, using everything from facial, palm and voice recognition linked to social security or passport authentication.

Of course, I’m dreaming. The horse has already bolted. The internet is an open door for anything that anybody wants to display, distribute and disseminate. Human nature, being what it is, will revolt against any loss of internet freedom or content, no matter how distasteful it might be. 

In view of all of this, it’s probably fair to say the least of our problems is targeted advertising.
