Never heard of the ethics of artificial intelligence? Surely you have. I am willing to bet that you read at least two articles in 2019 on the choices self-driving cars will need to make. Ought they always choose to save their passengers’ lives over pedestrians’ lives? Ought they predict the number of lives lost in every possible scenario and choose the one with the fewest fatalities? What if one of the pedestrians is a mother with a newborn baby? What if one of the passengers is a very old doctor close to finishing her Nobel Prize-worthy research on brain cancer?
Professional ethicists are fiercely debating the decisions such vehicles are supposed to make. These are certainly enormously interesting debates, and under no circumstances do I want to suggest that you should not follow them. However, I would like to suggest that perhaps, before trying to agree on all the intricacies a self-driving car developer should keep in mind, we should press our governments to apply the minimum standards we have already agreed on, and use human rights law to protect us from the potentially detrimental consequences of technology.
Self-driving cars are not yet part of our daily reality, but other technologies already are, or very soon will be. For example, my colleague Eva Simon is currently working on the European regulation concerning automated upload filters (you know, the ones that may kill memes). In the coming months, she and I will serve you with articles on the dangers the use of artificial intelligence may pose and on how automated decision making may affect, or already does affect, our lives.

NB: I would probably choose the life of the cancer researcher. Do you fiercely disagree? Hold on to your thoughts; there will be an opportunity to discuss such issues on our social media channels.