We’re not asking the right questions about AI ethics

Recently I watched a discussion about the ethics of AI – in other words, how to ensure that AI does more good than harm to human beings and societies. Many debates today become polarized quickly, and AI ethics seems to be no exception. Opinions tend toward one of two extremes: either people can always use machines (including AI) for their own good, or AI machines will become apocalyptic beasts. Now is the time to evaluate this more deeply.

Some people defend AI, saying it is not a threat because it is all basically software code programmed by humans, who can also create rules for its behavior in each situation. Humans can decide and control what kinds of ethical decisions machines make, and create iron rules that cannot be broken. These people also often argue that which machines are allowed, and what they may do, is ultimately a political decision.

The other side of that coin is that humans are exactly the problem – or at least those humans who will create machines to amplify their own power and position in business and society. Slave machines will do the work of people, sex robots will replace human partners, and fighter robots will populate armies. Basically, the logic is that bad people with bad intentions can – and will – do bad things with AI and machines to gain power in the world.

The reality is much more complex. Here are five questions that explore a few aspects of this debate:

  1. Can political processes control what kind of machines and ethics rules are implemented?

It is hard to believe that any political decision can really stop the development of AI. History offers many examples: if something is technically possible, someone will implement it sooner or later. There are many motivations to do so – making money, improving a business, gaining power or simple intellectual curiosity. New solutions and machines will be put to commercial use. Even if governments ban them for private use, they will still be developed for military purposes, or criminals and terrorists will develop them for their own ends. This is not to say that politicians, governments and societies cannot develop rules and laws for machines – the point is that bans and overly restrictive rules never work.

  2. Is AI technology inherently bad, or is it just another step in the natural development of human society, one that has improved our lives in many ways?

There have always been people who see development or progress as a threat. Of course, AI machines raise many complex questions, not only about how the machines behave, but also about the purposes for which they are developed. They can replace workers and change the distribution of wealth, and these changes can create crises for many individuals. We have seen changes of this kind many times throughout history, such as the shifts from agricultural societies to industrial societies and then to service societies. Nevertheless, all parties must take these issues seriously and work to find solutions for them. This means, for example, finding answers for wealth distribution (perhaps in the form of new tax and basic income systems), human rights and how each human being can maintain her or his dignity.

  3. Can we program ethical rules into machines so that they always behave as we intend?

It is still unknown whether machines will ever develop consciousness. At the very least we can say that if they do, it will be different from human consciousness. In any case, some machines are already becoming so complex that we cannot write simple rules governing how they think and behave. Machines process so much data, and learn so much from it, that we cannot predict their behavior in every situation, especially when machines are linked to each other and learn from each other too. Work is currently under way to build a kind of ‘moral machine’ inside AI. This can include top-down categorical rules (e.g. “never do this”) and bottom-up learning from different real-world situations. The current thinking is that these moral machines should be based on a hybrid model of rules and learning, as sketched below. But there are still many complex problems to solve before this works.
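To make the hybrid idea concrete, here is a minimal sketch of how top-down rules and bottom-up learned scoring might be combined. It is my own illustration, not a description of any existing system: the class names, the hard rule and the risk score are all hypothetical assumptions.

```python
# Hypothetical sketch of a hybrid "moral machine": categorical top-down
# rules are checked first, then a bottom-up learned score decides the rest.
# All names, rules and numbers are illustrative assumptions, not a real API.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Action:
    description: str
    harms_human: bool = False   # feature an upstream model would estimate
    risk_score: float = 0.0     # learned risk estimate in [0, 1]

@dataclass
class HybridMoralMachine:
    # Top-down: iron rules that can never be overridden ("never do this").
    hard_rules: List[Callable[[Action], bool]] = field(default_factory=list)
    # Bottom-up: a threshold on a score learned from real-world situations.
    risk_threshold: float = 0.5

    def permits(self, action: Action) -> bool:
        # 1. Any categorical rule vetoes the action unconditionally.
        if any(rule(action) for rule in self.hard_rules):
            return False
        # 2. Otherwise defer to the learned risk estimate.
        return action.risk_score < self.risk_threshold

machine = HybridMoralMachine(hard_rules=[lambda a: a.harms_human])
print(machine.permits(Action("fetch coffee", risk_score=0.1)))     # True
print(machine.permits(Action("push a person", harms_human=True)))  # False
```

Even in this toy version, the hard problems are visible: someone must choose the rules, and the learned score is only as good as the situations it was trained on.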

  4. Do we even know – and can humans agree on – which ethical rules to implement?

This important question is often ignored in discussions of AI and machine ethics. Not all people are ethical – or, put another way, people have very different ideas of what constitutes ethical behavior. Even from a philosophical point of view there are very different approaches, e.g. rule-based deontological models versus result-oriented utilitarian models, and further questions about how to interpret these models in practical situations; the toy sketch below shows how two such frameworks can disagree about the very same action. Before we can teach ethics and behavioral rules to machines, we must first define common principles. But even if we do, there will be people who teach different models to machines – for better or worse – just as people do to each other.
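As a toy illustration (my own, not from the article), the following sketch shows how a rule-based deontological check and a result-oriented utilitarian calculation can reach opposite verdicts on the same action. The predicates and numbers are hypothetical.

```python
# Toy comparison of two ethical frameworks judging the same action.
# Everything here is a hypothetical illustration, not an established API.

def deontological_verdict(action: dict) -> bool:
    # Rule-based: forbidden if it violates any categorical duty,
    # regardless of the consequences.
    violates_duty = action["involves_lying"] or action["breaks_promise"]
    return not violates_duty

def utilitarian_verdict(action: dict) -> bool:
    # Result-oriented: permitted if expected benefit outweighs expected harm.
    return action["expected_benefit"] > action["expected_harm"]

# A white lie told to prevent a larger harm: the frameworks disagree.
white_lie = {
    "involves_lying": True,
    "breaks_promise": False,
    "expected_benefit": 10.0,
    "expected_harm": 2.0,
}
print(deontological_verdict(white_lie))  # False: lying violates a duty
print(utilitarian_verdict(white_lie))    # True: benefit exceeds harm
```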

  5. Who should take the lead in discussion and decision making on AI ethics?

The simple answer is that everyone must participate, and have the right to participate, in this process. The reality is more complex. At the very least, the outcome will be shaped by a combination of technological, business and political processes. Even academic discussion is difficult, because it requires competence in many areas, such as moral philosophy, data science and economics – few people fully understand even one of those areas, let alone all three. An important starting point is to increase awareness and encourage open discussion and systematic thinking around these matters. But how many politicians, for example, have seriously started to think and talk about this?

As we can see, many questions remain open even as AI development races ahead – and the truly important ones concern the interaction between AI machines and human beings, and the impact on the latter, not just machines and their behavior. In the discussion I mentioned at the beginning of this article, someone made an interesting point: human beings and machines will probably become more similar over time, but not only because machines will become more like humans – it will also happen vice versa. As machines take on more central and important roles, people will start to behave more like machines.
