ITEM: Facebook admitted this week that it has been hiring third-party contractors to transcribe audio clips from its Messenger service – making it the fifth Big Tech company this year to reveal that humans are listening to online audio conversations.
According to Bloomberg, Facebook said that the contractors were employed to check whether the company’s AI software was interpreting Messenger conversations correctly. It also said the messages were anonymized, and were only taken from Messenger users who elected to have voice chats transcribed. Facebook also said it told its contractors to “pause” human transcription a week prior to the story’s publication.
Facebook isn’t the first company to be caught doing this – earlier this year, we found out that Amazon, Apple and Google have also hired humans to listen to audio clips from their respective virtual assistant platforms (Alexa, Siri and Google Assistant), while Microsoft has confirmed it has been using humans to listen to Skype audio to evaluate its translation feature.
In a sense, the Facebook news isn’t particularly surprising – partly because using humans to verify AI interpretations makes sense, and partly because Facebook has a long history of questionable privacy practices that no one knows about until they make headlines.
That said, it’s remarkable to me that after so many privacy debacles over the years, Facebook and other Big Tech firms still have a tendency to view privacy as something users should proactively seek out and implement themselves rather than a default setting.
Which, of course, many users don’t do – because privacy policies are often unclear, settings can be difficult to find, and the instructions aren’t always easy to follow even if you do find them.
Facebook says it only transcribed audio clips from Messenger users who agreed to transcription, but it’s unclear how many of them consented accidentally by ticking the “agree” box on lengthy T&Cs without reading them, and how many were presented with a specific notification asking whether they were okay with humans reviewing their audio.
Odds are it was the former – the Bloomberg report says Facebook’s privacy guidelines on data usage don’t specifically mention voice clips, and while audio might be implied as part of that data, Facebook doesn’t say anything about humans listening to it – let alone humans who don’t work for Facebook.
The same is true for Amazon, Apple, Google and Microsoft, reports Buzzfeed:
… Amazon, Google, Apple, Microsoft, and Facebook haven’t clearly told consumers what they do with their voice and video information. None of those companies’ data policies state that what we say and do in front of our voice assistants, internet-connected cameras, and messaging apps can be shown to strangers employed by the companies or their contractors.
The issue here isn’t just that Big Tech companies are harvesting user data – most people already know this to some degree. But most people also assume that data is being crunched by machines. Unless you actually work with AI, or at least in IT, it wouldn’t necessarily occur to you that humans might be listening to your VoIP calls and voice messages – let alone humans who don’t even work for the company that collected the data in the first place.
The fact that the conversations are anonymized doesn’t make it any less creepy for the many people who cringe at the idea of a stranger listening in on private (and in some cases intimate) conversations.
It’s easy to say that in a post-GDPR world, consumers should just go ahead and assume that everything they do online is digital fodder for big data algorithms, AI training sessions, advertising or whatever. On the other hand, the main reason GDPR exists in the first place is because enough people didn’t like the idea of resigning themselves to that model. GDPR aims to kickstart the process of giving consumers more of a say in whether their data can be harvested, and what it can and can’t be used for.
That ultimately requires unprecedented transparency and disclosure. It’s been said before, but apparently it needs to be said again: Big Tech companies (and anyone who collects user data, or allows partners to do so) need to make their data privacy policies as open and transparent as possible. That includes making the policies and associated tools easy to find, easy to read and easy to use. Standards and enforceable privacy regulations with teeth might help things along – and will probably be necessary for it to happen at all.