The following is an abridged version of Aimee van Wynsberghe’s talk on “Ethics as a driver for innovation in robotics & AI” at last week’s AI for Good Global Summit. Van Wynsberghe is co-founder and co-director of the Foundation for Responsible Robotics.
Think of ethics as ‘the good life’ – our picture of the good life.
When we live far away from our loved ones, telecommunication technologies help us bridge that gap. The technology doesn’t necessarily change an element of the good life, but it helps us get to the experience of the good life.
Of course, robotics and AI are no different. They will either change what the good life looks like or how we actually achieve the good life.
How can ethics inform the design and development of robotics and AI?
Robot ethics tries to cluster ethical questions into categories, each focused on a particular individual or stakeholder as the ethical agent.
One cluster of questions relates to the users of the robotic systems. Another cluster relates to the robot or the AI as an ethical agent in and of itself.
How will robots and AI best interact with humans and how will the rise of robotics affect human interaction? Should we confer human values onto machines, giving machines an ethical sensibility, or should ethics remain solely the responsibility of robots’ human masters?
Another cluster of ethical issues or questions – one especially relevant to the AI for Good Global Summit – relates to the designer, the developer, the policymaker: the group of stakeholders responsible for bringing robotics and AI into the world, for making it a reality.
How will we make robots in such a way that they preserve dignity and enhance wellbeing? What capabilities does the robot need to have to meet those kinds of criteria?
What are the standards that we should have for the data used by AI or to train AI?
How should we address issues related to employment, making a link between the risk of job loss and the debate surrounding our ability to make robotics that enhance rather than replace humans in their jobs?
We can’t forget that the media is also an important stakeholder in this cluster of questions, sometimes creating the rhetoric for how we understand robotics and AI to be good or bad.
What does ‘the good life’ look like?
As part of the design process, let’s imagine the future in 2030. We can use the Sustainable Development Goals (SDGs) as our starting point. We can ask what the ideal society would look like, and use ethics to get there.
We can imagine, in a humanitarian context, the good life to be one in which resources go further: more people can be helped, and aid reaches isolated places that would otherwise be inaccessible.
How can technology fit into this picture?
Many humanitarian NGOs are capitalizing on AI-powered drones, which calls upon us to ask what ‘informed consent’ means in this instance. Informed consent is part of a rich ethical tradition to protect individuals from being used as experimental subjects. We must consider how we can protect these ethical values and embed them into the technology.
Let’s imagine another utopian picture of the future.
We could have a future in which we minimize stereotyping and discrimination. In bureaucracies such as the court system, in healthcare, in banking perhaps, and in policing, with ethics as our starting point, we could use AI to detect when a bias has occurred, or to flag and predict high probabilities of certain biases affecting the decision-making of court officials or judges.
We could imagine another scenario in which AI contributes to the protection of the environment, with robots testing water pollution, drones flying over factories to monitor air pollution, and robots cleaning up e-waste.
Essentially what I’m trying to do is make ethics the star of these pictures. What’s different about this story, about this picture, is that the good life is the starting point, and the technology follows. The technology is what helps us get there rather than the other way around.
This utopian picture is one where AI is used to test for our biases, rather than as a continuation of our biases. Robotics and AI are used to express ethical values rather than degrade them or confront us with more ethical challenges. Ethics becomes a tool for thinking about how to actually bring these values – this language – to life.
Think about ‘the good life’ as the starting point. Use this language to paint the picture of the utopia you believe is important.
The next question then is: What ethical values do we want to be the defining features of AI in the year 2030?