
Meta is shutting down – or at least pausing – its facial recognition software by December. In a post, Jerome Pesenti, VP of Artificial Intelligence, cites societal concerns as the main reason, while adding that he believes facial recognition still has an important role to play in a narrower set of use cases.
The decision, said Mr Pesenti, was one that had to weigh the benefits against concerns over privacy.
The company has been using facial recognition software for around a decade and holds face templates for more than a billion people. All of these will be deleted over the coming weeks.
It is an interesting move.
For a company that now has the reputation of being rapacious with our data and putting profit before health, it seems a little spineless. Caring for society and its needs is not something that immediately comes to mind when it comes to Facebook, sorry, Meta.
What does come to mind is that it is in the crosshairs across the world for disregarding privacy and making billions from its ‘targeted’ advertising. Perhaps its facial recognition software, in itself, is not a money maker.
What definitely comes to mind is that the company is under intense pressure to transform itself into a squeaky clean organisation, and this may be its way of ‘putting people first’ in a very public way. About a third of the platform’s users have opted in to its facial recognition software, allowing them to be identified in posts and photos. But even their face templates are being deleted.
Perhaps the company is hoping for a sympathy vote, as people ask why the US Government, particularly, is being so mean and ruining everyone’s fun.
The conclusion must be that this decision has less to do with the feelings of society and more to do with outwitting The Regulator. It is, surely, no coincidence that Facebook was fined $5 billion two years ago over privacy complaints, and that its facial recognition software was part of the discussions and decisions around that fine.
The decision to stop using facial recognition software (however that was arrived at) poses questions about other techniques and software advances that could very easily be put in the same basket.
In fact, the whole, broad and scary issue of AI itself is thrown into question. We have reached the point where AI is ‘improving’ itself, and the logical end of that trajectory is a system able to make its own decisions about what counts as the best possible outcome. And that may not be good for humans, given the mess we have made.
As Mr Pesenti says, ‘Every new technology brings with it potential for both benefit and concern, and we want to find the right balance’.
As with facial recognition software, so with a number of other technologies and combinations of technologies: there are, indeed, concerns. Maybe we should press pause until we have thought things through a little more thoroughly.