Facebook is in trouble again – this time over the news that a voter-profiling company called Cambridge Analytica managed to get its hands on the personal data of 50 million Facebook users, and then use that data to help President Donald Trump’s 2016 campaign.
Recode has a good summary here of what happened. The basics: in 2015, around 270,000 people downloaded the “thisisyourdigitallife” personality quiz app, which harvested data not just from those users, but also from their Facebook friends. The app developer, Cambridge University professor Dr. Aleksandr Kogan, later handed all that data to Cambridge Analytica.
Much of the furor is naturally focused on the political implications, but it also raises serious questions over how Facebook handles user data. Again.
This is not the first time Facebook has faced such questions about data collection and privacy, as well as how that data is used by third parties. And not for the first time, Facebook’s initial response has been: we didn’t do anything wrong.
Facebook’s defense in the Cambridge Analytica scandal hinges on three points: (1) the data wasn’t stolen, (2) the data was collected with the consent of users and (3) it punished Cambridge Analytica for violating its terms of service. (It also punished Chris Wylie, the whistleblower who broke the story to the media, by deleting his account.)
It’s fair to say the data wasn’t stolen, although Facebook seems to be using that argument to imply that its sole responsibility regarding user data is to make sure no one steals it. The consent argument, however, is rather disingenuous – you can argue that the people who downloaded the app and logged in via Facebook consented to data collection, even if only via the T&Cs they may or may not have read. You can’t really say that about their Facebook friends, whose data was also collected via that app. And I don’t imagine any of them consented to their data being given to Cambridge Analytica to allegedly influence an election.
Also, punishing Cambridge Analytica – several years after the fact – is beside the point. Apart from highlighting a data protection policy that punishes rather than prevents misuse of data, it glosses over the fact that Facebook reportedly found out about this in 2015, and all it did was ask Cambridge Analytica to delete the data. Which the firm reportedly didn’t do – it merely ticked a box on a form saying it had. Facebook didn’t follow up, and it didn’t bother to notify any of the 50 million users affected.
The story is still unfolding, so it’s hard to say what’s next. While some may predict that this finally spells the end for Facebook, I’m not convinced of that. We’ve heard that too many times before whenever Facebook gets caught in some kind of data privacy shenanigans, or even its ongoing problem with fake news and the exploitation thereof.
If nothing else, Facebook is simply too entrenched in the digital landscape. I know quite a few high-profile people have quit Facebook over this, and they may take their fans with them. But as Zeynep Tufekci writes in the New York Times, for many users quitting Facebook isn’t a realistic option:
In many countries, Facebook and its products simply are the internet. Some employers and landlords demand to see Facebook profiles, and there are increasingly vast swaths of public and civic life — from volunteer groups to political campaigns to marches and protests — that are accessible or organized only via Facebook.
For my money, however, the issue goes well beyond Facebook and Cambridge Analytica. It’s no secret that Facebook is essentially in the data collection and distribution business. That’s the business model. We’ve known this for years. The real issue is that Facebook isn’t the only company collecting data as a business model, and Cambridge Analytica is probably not the only company that has misused that data – it almost certainly won’t be the last. Even if Facebook collapses, the business model will live on via other companies.
We probably need a plan for that.
Best practices and better transparency are a good start, of course – certainly Facebook needs to seriously rethink how it monitors third-party usage of its data and informs users about improper usage. In fact, it’s probably in Facebook’s best interest to take the lead on serious reform of data collection and privacy, since it’s one of the biggest data collectors on the planet.
Some privacy experts have pointed out that strong regulations like the upcoming General Data Protection Regulation (GDPR) in the EU could also be effective in cases like this – indeed, reports Reuters, the Facebook/Cambridge Analytica incident is the sort of thing GDPR was crafted to regulate in the first place:
The danger faced by Facebook going forward is two-fold: Complying with the rules means letting European users opt out of the highly targeted online ads that have made Facebook a money machine. Violating GDPR mandates could subject the California company to fines of up to 4% of annual revenues.
Had the Cambridge Analytica incident happened after GDPR becomes law on May 25, it “would have cost Facebook 4% of their global revenue”, said Austrian privacy campaigner and Facebook critic Max Schrems. Because a UK company was involved and because at least some of the people whose data was misused were almost certainly European, GDPR would have applied.