Social media sites struggle to stop things like the viral live-streaming of the Christchurch attack because their platforms are designed to do exactly that.
Among the many horrific aspects of Friday’s terrorist attack on two mosques in Christchurch, New Zealand, is the fact that the white supremacist who carried out the mass shooting live-streamed the whole thing on Facebook. Unsurprisingly, the video went viral almost immediately on Facebook, Twitter, YouTube and Reddit.
Sadly, this in itself is not news. The New Zealand shooter is hardly the first person to live-stream a murder on social media, let alone leverage social media as a platform to broadcast his message of bigotry and hatred. But it raises the question: why haven’t social media platforms gotten better at spotting and stopping such content before it goes viral?
There are several answers to this, the less obvious one being: actually they have gotten better, relatively speaking. According to Wired, for example, most YouTube videos deemed to violate its T&C are taken down automatically, and 73% of the ones flagged for removal are taken down before anyone gets a chance to view them. Facebook and Google have also developed better tools for spotting things like ISIS beheadings and child porn.
But they’re not perfect, and content still slips past both the censorbots and the humans employed to spot such material and discern whether it violates community standards (a job which, incidentally, is both extremely difficult and highly stressful).
There are a number of reasons for this, the two biggest being that (1) social media giants are simply too big to successfully police every bit of content uploaded to their platforms, and (2) those platforms are designed to make content as viral as possible.
And that’s just for images and prerecorded videos. It gets even harder when you throw live-streaming into the mix. It’s not impossible to spot and stop a live video that crosses the line – but it is more difficult. And it can only be detected once it starts. The Christchurch video reportedly ran for 17 minutes before moderators spotted it.
There’s a further complication: the people in charge of social media companies adhere to a free-speech ethic that may be admirable from a philosophical point of view, but that is becoming increasingly problematic as extreme content espousing hateful racist ideologies goes increasingly mainstream. Social media may actually be contributing to that shift via its own algorithms – researcher Zeynep Tufekci has been studying how YouTube’s recommendation algorithm appears to steer viewers toward increasingly extreme content as a way to keep them glued to the screen.
The other difficulty with adhering to the free-speech idealism of the “marketplace of ideas” is that extremists have become savvy enough at social media to give themselves a bigger megaphone. Indeed, by all indications, the Christchurch shooter knew exactly what he was doing in using Facebook as a PR platform for his white-supremacist manifesto to ensure maximum reach.
Of all these challenges, the technology part is arguably the easiest to solve. It will take time, but one day – maybe in a few years, maybe in five or ten – AI may be able to detect, delay and stop offensive and illegal livestreams seconds after they start, or possibly even beforehand.
The real problem is that social media business models are intentionally designed to make it as easy as possible to upload and distribute content, including live broadcasts, to as many people as possible – and the people in charge of these platforms have little business or ideological incentive to change that.
That may change as Facebook and others come under increasing pressure from regulators and governments to get their platforms and content under control to combat everything from extremism, bullying and online sex trafficking to fake news and election manipulation.
But there’s no escaping the fact that in the case of the Christchurch shooter, the social media platforms he used did exactly what they were designed to do. And until the tech giants step up and take responsibility for their own platforms, white supremacists and other terrorists are going to continue to exploit them to similarly devastating and horrifying effect.