AI ethics should put informed principles over populist fearmongering

Image credit: stellamc / Shutterstock.com

Google’s declaration of principles for AI is a short but carefully worded text covering the main issues related to the uses of its technology. It is worth reading the document, given that it raises many points about the future and the rules we will need to guide us as AI evolves.

The regulation of artificial intelligence is far from a new subject and is being widely debated – Google, as a leading player in the field, is simply laying out its position after a long process of reflection. The company had been working on this statement of principles for some time, even as it continues its work in other areas of AI.

In the wake of revelations about Google’s involvement in Project Maven, most media outlets have interpreted the company’s statement of principles somewhat simplistically as a promise that its AI won’t be used to develop weapons or in breach of human rights, but it is clear that the document has much more far-reaching intentions.

Weapons are mentioned only briefly, in a section entitled “AI applications we will not pursue”, limited to saying that the company will not help develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”. That said, it will continue its work “with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”

The real significance of the statement is that it reflects all areas of AI, and does so in a well-informed, realistic way (without summoning up images of killer robots or superior intelligence able to sweep annoying humans aside).

That kind of AI will not be with us for many years. For the moment, much of the discussion concerns applications that decide which products are offered to potential customers, set pricing policies or detect fraud, along with a growing number of similar uses – all of which may be less exciting than killer robots, but which still have major potential to get things wrong.

Among the most relevant points from the Google statement are:

  • “Be socially beneficial”
  • “Avoid creating or reinforcing unfair bias”
  • “Be accountable to people”
  • “Incorporate privacy design principles”
  • “Be made available for uses that accord with these principles” (which implies preventing its use by those who do not respect them)
  • “Uphold high standards of scientific excellence”
  • “Be built and tested for safety”.

These are far more important commitments than whether a given company will develop weapons or not. Many of the problems raised by the rapid rate of technological development come not from the potentially harmful objectives, but from mismanagement, inadequate security and procedural errors in a world where not everybody has the best intentions.

Naiveté is no longer an excuse when it comes to technologies that can be used for harm, and Google reaffirms its commitment to avoiding this – a commitment that goes far beyond “Don’t be evil.” This, of course, does not mean the company won’t make mistakes, but the commitment to submitting to rigorous processes and to trying to avoid them at all costs is important.

Reflection on the ethical principles associated with the development of AI algorithms is important, and needs to take place in a reasoned manner. It makes no sense for those who do not understand machine learning and AI to be involved in drafting the ethical principles that will govern their future. This particularly applies to politicians, many of whom are not qualified to even comment on – much less legislate on – these issues. Those who do not understand the topic have a responsibility either to learn about it first or to stay out of the debate.

It’s one thing for Google to ponder the ethics of AI – it is one of the main players in the area, is applying it to all its products, and is in the midst of an ambitious training program to teach its entire workforce how to use it. It’s quite another for a government, a supranational body or any other political organization to do so, given that in most cases their knowledge of the subject is at best superficial, and at worst nonexistent or alarmist. We’re going to see more and more discussion on this subject, but what interests us most is not the outcome, but the process and the intended consequences.

Asking questions about the future to avoid potentially negative or unwanted consequences can be useful, especially if done with Google’s rigor and discipline. Doing so on the basis of unwarranted fears rooted in science fiction is more likely to get in the way of progress and humanity’s evolution – we need to guard against irrational fears, misinformation, and their close relatives: demagoguery and populism. Laying down meaningful principles about the development of artificial intelligence algorithms will be an important part of how our future plays out. AI is a question of principles, sure, but well-founded principles.

This article was originally published by Data Driven Investor (DDI).
