Moral Machine

Adoption of self-driving, autonomous vehicles (AVs) promises to dramatically reduce the number of traffic accidents. But some accidents will be unavoidable, and will require AVs to make tradeoffs about potential risk, such as risk to pedestrians on the road versus risk to the passengers in the car. Even if these "moral dilemmas" are rare, defining algorithms to make such decisions is a challenge, since people may be uncomfortable with the idea of Artificial Intelligence making life-and-death decisions without human oversight. Experimental ethics can help manufacturers and regulators understand the psychological challenges that may undermine trust in driverless cars, and our ability to exercise oversight over their behavior. This may be a necessary precondition to the wide adoption of autonomous transportation.

Interactive: Moral Machine web site

Interactive: Results from the Moral Machine experiment

Video: TED Talk, GESF Keynote

Scientific writings

Books

For a full account of the story behind the Moral Machine experiment, read Jean-Francois Bonnefon’s excellent book “The Car That Knew Too Much” (MIT Press, 2021).

Videos

DW Documentary about AI
