As you no doubt know, Facebook is in trouble – not financially, but politically. The evidence is piling up that Russian-backed organizations bought ads, planted fake news stories and created fake accounts to sow discord and chaos in the 2016 US presidential election.
You can read the details elsewhere. But the real takeaway is that the Russia ads controversy is just the latest example of social media being exploited and abused with disastrous consequences. Facebook and other major social media platforms have evolved from simple, convenient ways to connect with friends into divisive outrage factories churning out hoaxes, fake news and propaganda – content that is not only arguably making people dumber, but is also creating incompatible sociopolitical reality bubbles.
No wonder social media giants are facing the wrath of Congressional committees and EU regulators demanding answers.
It’s gotten so bad that The Economist devoted a cover story to examining the problem. And it’s a huge problem for several reasons, not least of which is the market dominance of Facebook, Google and Twitter, The Economist reports:
Social media are a mechanism for capturing, manipulating and consuming attention unlike any other. That in itself means that power over those media—be it the power of ownership, of regulation or of clever hacking—is of immense political importance.
Just what we can do about this is anyone’s guess at the moment. The Economist rightly points out that regulation won’t do much good until we fully understand what’s happening and what can be done to mitigate it. That may take time, because it’s an immensely complex problem. That said, minor fixes like regulations requiring web ads to be truthful are a good start – even if that ends up meaning a complete ban on political ads (then again, is that really so bad?).
As for Facebook et al. fixing the problem from within, well, that depends on how far within they’re willing to go. Artificial intelligence is one possibility – although, as others have pointed out, Facebook’s AI doesn’t seem to be up to the job, at least not yet – but recent reports suggest the real problem lies in corporate culture. In other words, Facebook and Twitter contributed to the overall problem by either not realizing the problem existed or not understanding why it was theirs to fix.
This certainly seems to be the case with Facebook’s Mark Zuckerberg. For example, when the fake news issue came to light last year, Zuckerberg’s initial response was to cite free-speech/open-internet idealism, and to point out that fake news couldn’t possibly have made a difference in the 2016 US election because it was a very small percentage of total content on Facebook, so how bad could it be?
Pretty bad, as it turns out.
Zuckerberg’s comments are telling – where others saw Facebook and other social media being used to spread false information that at worst may have altered the outcome of the election and at best polarized political opinions in the US even more than they already were, Zuckerberg saw a simple equation.
According to a recent piece in BuzzFeed, that mentality permeates much of Facebook’s corporate culture:
To truly understand how Facebook is responding to its role in the election and the ensuing morass, numerous sources inside and close to the company pointed to its unemotional engineering-driven culture, which they argue is largely guided by a quantitative approach to problems. It’s one that views nearly all content as agnostic, and everything else as a math problem. As that viewpoint has run headfirst into the wall of political reality, complete with congressional inquiries and multiple public mea culpas from its boy king CEO, a crisis of perception now brews.
Another article in the New York Times takes a slightly kinder approach, describing Facebook as a well-intentioned company that has lost control of its creation:
It’s a technology company, not an intelligence agency or an international diplomatic corps. Its engineers are in the business of building apps and selling advertising, not determining what constitutes hate speech in Myanmar. And with two billion users, including 1.3 billion who use it every day, moving ever greater amounts of their social and political activity onto Facebook, it’s possible that the company is simply too big to understand all of the harmful ways people might use its products.
The saga is still developing, of course. Now that regulators and governments are stepping in, Facebook and other social media companies seem chastened enough to realize that they need to look at their businesses differently.
Telecoms companies with ambitions to get into the digital content space should do the same – especially those of you planning to create platforms that leverage big data, and considering an advertising model to monetize it. After all, any regulations that result from all this are likely going to apply to you too.
More to the point, with the rest of the telecoms sector shifting to a more customer-centric model where the customer experience is your competitive edge, you don’t want to be thinking of your customers purely in terms of digital profiles and algorithms. And if/when people and organizations use your platforms in potentially harmful ways, the last thing you want to be doing is appearing in front of a government committee saying, “Look, we’re not responsible – we’re just the platform.”
It didn’t work for Mark Zuckerberg. Odds are it’s not going to work for you, either.