Mike Schaekermann


Today, I would like to draw your attention to a blog post highlighting how self-driving cars will force humans to rethink their most basic ethical foundations [1]. We have probably all read or heard a plethora of articles pointing out how much more reliable and safe autonomous vehicles have become over the last few months and years. In contrast, the article Why Self-Driving Cars Must Be Programmed to Kill [1] reviews a scientific paper [2] posing the question of how self-driving cars should be programmed to act in the event of an unavoidable accident.

Special emphasis in both the article and the cited paper is placed on the question of whether the general ethical premise should be to minimize pain in general and the loss of life in particular. Among philosophers, such an approach is called utilitarianism [10]. However, in some situations, such a utilitarian calculation could conclude that the death toll is minimized if the car avoids a group of individuals on the street by swerving into a wall, killing the passenger instead.
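To make the utilitarian calculus concrete, here is a minimal sketch of how such a planner might choose. All scenario names, probabilities, and the simple expected-deaths harm model below are hypothetical, purely for illustration; they are not taken from the reviewed paper.

```python
# Hypothetical utilitarian dilemma: each candidate action maps to a list of
# (probability_of_fatality, people_at_risk) pairs. A strictly utilitarian
# planner picks the action that minimizes the expected number of deaths.

def expected_deaths(outcomes):
    """Expected number of fatalities for one action."""
    return sum(p * n for p, n in outcomes)

def utilitarian_choice(actions):
    """Return the action with the lowest expected death toll."""
    return min(actions, key=lambda a: expected_deaths(actions[a]))

# Illustrative numbers only: staying on course risks three pedestrians,
# swerving into the wall risks the single passenger.
actions = {
    "stay_on_course": [(0.9, 3)],    # 2.7 expected deaths
    "swerve_into_wall": [(0.8, 1)],  # 0.8 expected deaths
}

print(utilitarian_choice(actions))  # prints "swerve_into_wall"
```

The sketch makes the paradox visible: under these (made-up) numbers the utilitarian policy always sacrifices its own passenger, which is exactly the car most survey respondents would not want to sit in.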

The article highlights the importance of such ethical questions for the acceptance and adoption of self-driving cars in society. The fact that media from other domains, such as business [7] and general news [8], also report on this pressing topic indicates its relevance for the broader public. The authors of [2] also emphasize that carmakers will have to pursue potentially incompatible objectives: being consistent in algorithmic morality, preventing public rejection of the technology, and encouraging customers to buy cars.

In the reviewed paper [2], the authors tackle the problem of ethical dilemmas by employing a new technique at the intersection of psychology and philosophy called experimental ethics. They ran a crowdsourcing experiment asking for people’s opinions about specific scenarios in which the lives of one or more pedestrians could be saved if a car avoided them by driving into a barrier, killing the passengers of the car or a single pedestrian instead. The overall result of this extensive questionnaire was that most people wished for others to drive utilitarian self-driving cars without being ready to buy and use a utilitarian autonomous vehicle themselves - a comprehensible paradox.

Image reprinted from paper [2]

Another paper [4] takes a different approach to the same problem, stating that the answer to these types of ethical dilemmas cannot be found through humans’ capacity for ethical reasoning alone, because the assignment of blame to either the human programmer or black-box AI processes might be too complex for human beings to resolve. The authors argue that we should make use of complementary AI systems to ensure that other AI systems do not act illegally or unethically.

In general, I think the presented article highlights the importance of some of the topics we have also discussed in class, such as reasoning under uncertainty. For example, the authors of [2] describe the scenario of a car avoiding a crash with a motorcycle by driving into a wall, taking into account that the chance of survival is higher for the driver of the car than for the rider of the motorcycle. Another example, raised in the same paper and related to the notion of the expected discounted sum of future rewards, highlights the question of whether decisions should be adjusted depending on the age of the passengers, given that children, on average, have a longer life ahead of them than adults.
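The expected discounted sum of future rewards mentioned above has a standard form in reinforcement learning, G = Σₜ γᵗ rₜ. A toy sketch (the discount factor and the uniform per-year rewards are made-up values, only meant to show why a longer remaining lifespan yields a larger discounted return):

```python
def discounted_return(rewards, gamma=0.9):
    """Sum of gamma**t * r_t over a sequence of future rewards."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

# Hypothetical: value each remaining life-year as one unit of reward.
# A longer stream of identical rewards is worth more, which is the
# intuition behind weighting decisions by remaining life expectancy.
child = discounted_return([1.0] * 70)  # ~70 years ahead
adult = discounted_return([1.0] * 30)  # ~30 years ahead
print(child > adult)  # prints "True"
```

Note that with a discount factor below 1 the difference shrinks quickly: distant years contribute almost nothing, which is itself an ethically loaded modeling choice.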

In addition, paper [2] mentions the notion of randomization in decision making as a way to break ties in irresolvable ethical dilemmas. This idea partially ties in with optimization approaches like simulated annealing or genetic algorithms, where randomization is used as a means to explore the solution space in scenarios where it is nearly impossible to find the optimal solution within an acceptable time frame.
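Randomized tie-breaking itself is easy to sketch: collect all actions whose expected harm lies within a small tolerance of the best one, then pick uniformly at random among them. The tolerance `eps` and the harm scores below are hypothetical illustration values.

```python
import random

def choose_with_random_tiebreak(harms, eps=1e-6, rng=random):
    """Pick a least-harm action; break near-ties uniformly at random."""
    best = min(harms.values())
    tied = [a for a, h in harms.items() if h - best <= eps]
    return rng.choice(tied)

# An irresolvable dilemma: both options carry identical expected harm,
# so neither party is systematically favored by the algorithm.
harms = {"swerve_left": 1.0, "swerve_right": 1.0}
print(choose_with_random_tiebreak(harms))  # either action, chosen at random
```

Unlike simulated annealing, where randomness serves search, the randomness here serves fairness: no class of road users is deterministically disadvantaged when the moral calculus genuinely cannot distinguish the options.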

In my opinion, the discussion of moral dilemmas in the context of self-driving cars is just the tip of the iceberg. The next obvious cases raising similar questions will be all other types of vehicles equipped with full autonomy in the near future, such as airplanes, ships, and motorcycles. In fact, I think the discussion is fundamentally relevant to all autonomous agents with a large degree of freedom in decision making, both human and non-human. One example from the medical domain would be a robot performing a Caesarean section that is confronted with a scenario in which any decision it makes will lead to the death of either the mother or the child.

In conclusion, I would like to express my opinion that the social implications of this article are mostly positive, because real-life moral dilemmas will force us to explicitly reason about our underlying ethical foundations, both as individual human beings and as a society as a whole (referring to the approach of experimental ethics proposed in [2]). Essentially, my assumption is that this need for explicitness fosters conscious reflection about our most basic moral values - and how can it get any more explicit than writing algorithmic instructions for a computer? ;)