What legal protection do we have from machines and AI?


Is it time to ask an important question: what legal protection do we have against machines? This came to mind at a recent event I attended in San Francisco about the risks of generative AI.

One of the speakers started his presentation by asking how many in the audience knew about the British Post Office scandal. I was one of only three people familiar with it, and it is an interesting case for thinking about how much we can trust AI.

The British Post Office Scandal 

The Post Office scandal, as it is known locally, has been in the news in the UK for several years, but it is not well known elsewhere. Between 2000 and 2014, more than 700 Post Office branch managers received criminal convictions after faulty accounting software made it look as though money was missing from their branches. The case has been called Britain's most widespread miscarriage of justice.

It took years for the Post Office and the software vendor to admit the system was unreliable. The delay had serious personal consequences for many branch managers beyond the criminal convictions themselves, including loss of livelihood, bankruptcy, divorce, and suicide. It also shows how badly faulty equipment can affect people's lives when they cannot defend themselves or exercise the right to prove their innocence, and when data from a machine is the only grounds for judgment.

Higher security levels

We know all software has bugs, and it is normal for fraud detection and similar systems to raise false positive alarms. Many of us have received messages from financial services companies asking us to confirm that a particular transaction was really initiated by us, or that an unusual login was really ours. This is a normal way to achieve a higher level of security. At the same time, it cannot mean that someone is sued or convicted directly on the basis of this kind of alarm.
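A purely hypothetical sketch of that design point follows; the field names, thresholds, and anomaly rule are invented for illustration and do not describe any real bank's system. The point is that an alarm should only trigger a verification step with the customer, never an accusation:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str

def is_unusual(tx: Transaction, usual_countries: set[str], usual_max: float) -> bool:
    # A crude anomaly rule for illustration; real systems use far richer models,
    # and false positives are expected either way.
    return tx.country not in usual_countries or tx.amount > usual_max

def handle_transaction(tx: Transaction) -> str:
    # The only automated consequence of an alarm is a request for confirmation,
    # never a judgment about the customer.
    if is_unusual(tx, usual_countries={"FI", "GB"}, usual_max=2_000.0):
        return f"verify_with_customer:{tx.customer_id}"
    return "process_normally"

print(handle_transaction(Transaction("c-42", 5_400.0, "BR")))  # -> verify_with_customer:c-42
```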

Years ago, we worked with a company to automate insurance claim processing. The system handled over 70% of claims automatically; the rest needed manual investigation, and some of those involved potential insurance fraud. However, when the machine routed a case to the manual process, it did not automatically mean that someone was attempting to commit fraud.
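Below is a minimal, hypothetical sketch of that kind of triage logic; the class, scores, and thresholds are invented for illustration and are not the actual system. A "possible fraud" route only means a human should look closer:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    amount: float
    completeness: float   # 0..1, how complete and consistent the paperwork is
    anomaly_score: float  # 0..1, output of some assumed risk model

def triage(claim: Claim) -> str:
    # Straightforward claims are settled automatically (the ~70% mentioned above).
    if claim.completeness > 0.9 and claim.anomaly_score < 0.2:
        return "auto_settle"
    # A high anomaly score only routes the case to an investigator;
    # it is a flag for a closer look, not a fraud verdict.
    if claim.anomaly_score > 0.7:
        return "manual_review_possible_fraud"
    return "manual_review"

print(triage(Claim("c-1", 1_200.0, completeness=0.95, anomaly_score=0.05)))  # auto_settle
print(triage(Claim("c-2", 8_000.0, completeness=0.60, anomaly_score=0.80)))  # manual_review_possible_fraud
```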

Layers of risks 

We have heard a lot about the potential risks when artificial intelligence drives a car, operates an aeroplane, or controls a factory. These systems can cause direct physical harm to people if they do not work properly, but at least in those cases we normally see that something went wrong. Cases like the Post Office scandal illustrate situations where, for years, the system was believed to be working correctly, and not necessarily because anyone was trying to hide the problems.

It reminds us how serious the consequences of this kind of malfunction can be. In the Post Office case, the failures were errors in conventional code, but with AI things are different: it is not only about checking the code and testing its functions, but also about auditing how the models behave, especially when they adapt over time.

Four issues to consider in trusting machines

We have at least four issues to consider regarding how much to trust an artificial intelligence system:

  1. The correctness of the software and how well it is tested;
  2. Whether the training data is correct and the training is done properly;
  3. How the system is audited and monitored across its use cases;
  4. Whether all legal rights are guaranteed to everyone, remembering that every person is innocent until proven guilty, even when an AI system claims something against them.

We are seeing more and more IT systems, data analytics, and artificial intelligence systems that monitor our work, financial transactions, insurance, health care, and many other important functions of daily life. We need to be able to trust that these systems work properly. At the same time, we also need a layer of safeguards that can protect us against machines and their decisions when they go wrong.

Credit scoring is a simple example where a wrong score (for whatever reason) causes harm to a person. Something more sweeping, like scoring citizens and their trustworthiness, raises even more concerns, although such systems are primarily an ethical issue. And there are many other systems that can have a large impact on our lives even when they work as designed, if they have been trained on unreliable data or hold incorrect data about us.

Legal protection from machines and AI

It should be evident that people also need the legal right to protect themselves against machines. There are many examples of people trusting machines too much and believing that machines cannot make mistakes.

But an even more complex question is how to collect and analyze evidence. When a large artificial intelligence system is run by a tech giant or a government, how does an individual get the resources and competence to obtain the relevant information, analyze the data, and protect themselves? People should have full access to their own data, the capability to use it, and 'AI defence experts' to help them in these types of cases. AI development also forces us to think about legal practices and how people can legally fight against machines, now and in the future.
