Facial recognition systems + geofencing – enter the world of ‘pre-crime’


It sounds like something out of Minority Report. Facial recognition systems are being used in conjunction with other, older technologies to keep tabs on citizens and their whereabouts.

This is according to a story in MIT Technology Review.

Adapted from Covid-tracking technology, facial recognition systems in China are being used to make sure that citizens are where they should be. The story tells of a student belonging to a Muslim group who got a tap on the shoulder in the street, was taken to a police station and was then sent to a detention centre for several months, all because she was outside the zone where she was meant to be.

The facial recognition system was combined with geofencing technology and recognised that she was outside her designated area, so her ‘aura’ on the system turned orange, signalling that a possible ‘pre-crime’ was in progress.
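As a rough sketch of how such a pairing might work, the core logic is just a face match followed by a point-in-zone test. Everything below (the zone boundaries, the person ID and the ‘aura’ colours) is invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

# Hypothetical sketch: a recognised face is checked against a geofence.
# Zone coordinates and IDs here are made up for illustration.

@dataclass
class Zone:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

# Each watched person is mapped to the zone they are permitted to be in.
designated_zones = {
    "person-0042": Zone(39.90, 39.95, 116.35, 116.45),
}

def check_sighting(person_id: str, lat: float, lon: float) -> str:
    """Return an 'aura' colour for a camera sighting of a matched face."""
    zone = designated_zones.get(person_id)
    if zone is None:
        return "grey"        # not on a watchlist
    if zone.contains(lat, lon):
        return "green"       # inside the designated area
    return "orange"          # outside it: the 'pre-crime' flag

# A camera that has matched a face to person-0042 at a given location:
print(check_sighting("person-0042", 39.99, 116.40))  # -> "orange"
```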

You could say that this is fine and an efficient way of upholding the law of the land; an effective defence against terrorist attacks and extremism. You could say that it is the most outrageous breach of human rights imaginable.

Both are probably right and you can bet your last dollar that certain regimes around the world already have, or plan to have, this technology as soon as possible. In fact, even Amazon (allegedly) received a delivery of these facial recognition systems from Chinese surveillance company Dahua, so that they could monitor their workers for signs of Covid (obvs).

Facial recognition systems are all very well but, like all technology, it is down to the human leaders, trainers and programmers to decide who gets the benefit of the system: in the case of the Chinese student, the State or the citizen.

Facial recognition is also one of those technologies that is still not foolproof. One such system, again pointed out by MIT Technology Review, creates a ‘fake’ face every time you click. Click again, and it creates another. And another. And another.

Except that, if you click enough times, it eventually shows you a real face that it was trained on in the first place. And, of course, we already know that the reverse is also true: real images of real faces can be swapped onto ‘fake’ faces in what can easily become a ‘deepfake’ nightmare.
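For the curious, the memorisation problem can be illustrated with a toy check. This is a hedged sketch, assuming a stand-in ‘generator’ and a random stand-in training set rather than any particular model: generate a face, measure its distance to the nearest training image, and flag it when the two are effectively identical.

```python
import numpy as np

# Toy illustration of training-data memorisation. The 'generator' below
# is a stand-in that occasionally leaks a training image; the training
# set is random noise. Neither represents any real face model.

rng = np.random.default_rng(0)

# Stand-in training set: 1,000 'faces' as flattened 64x64 grey images.
training_faces = rng.random((1000, 64 * 64))

def fake_generator() -> np.ndarray:
    """Pretend GAN: usually returns noise, occasionally a training image."""
    if rng.random() < 0.1:                       # the memorised case
        return training_faces[rng.integers(len(training_faces))]
    return rng.random(64 * 64)

def nearest_distance(face: np.ndarray) -> float:
    """L2 distance from a generated face to its closest training image."""
    return float(np.min(np.linalg.norm(training_faces - face, axis=1)))

for click in range(20):
    face = fake_generator()
    d = nearest_distance(face)
    if d < 1e-6:   # effectively identical to a training image
        print(f"click {click}: distance {d:.2e} -> a real training face leaked")
```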

Facial recognition systems, particularly when combined with other technologies, have joined the ongoing and increasingly serious and difficult debate about the ethics of where we are going with these cutting-edge technologies.
