Over the past few years, the MIT-hosted "Moral Machine" study has surveyed public preferences about how artificial intelligence systems should resolve moral dilemmas, most prominently those faced by driverless cars.

One conclusion from the data is that when an autonomous vehicle (AV) encounters a life-or-death scenario, how people think it should respond depends largely on where they are from and what they know about the pedestrians or passengers involved.

For example, in an AV version of the classic "trolley problem," some might prefer that the car strike a convicted murderer before harming others, or that it hit a senior citizen before a child. Others might argue that the AV should simply roll the dice to avoid data-driven discrimination.
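To make the contrast concrete, here is a minimal sketch of the two kinds of policies being debated: one that weighs attributes of the people involved, and one that deliberately ignores them. The code is entirely hypothetical; the function names, labels, and cost values are invented for illustration and are not drawn from the study.

```python
import random

def choose_randomly(options):
    """'Roll the dice': pick uniformly at random, ignoring every
    attribute of the people involved, so no data-driven
    discrimination can enter the decision."""
    return random.choice(options)

def choose_by_score(options, social_cost):
    """Attribute-weighted policy: 'social_cost' is a hypothetical
    scoring function (e.g., penalizing harm to children more than
    to adults) standing in for the kinds of preferences the survey
    elicited. The AV picks the option with the lowest score."""
    return min(options, key=social_cost)

# Hypothetical usage: the labels and costs are illustrative only.
options = ["harm elderly pedestrian", "harm child pedestrian"]
print(choose_randomly(options))
print(choose_by_score(options, {"harm elderly pedestrian": 1,
                                "harm child pedestrian": 5}.get))
```

The point of the sketch is that the disagreement is not about implementation difficulty; both policies are trivial to code. It is about which one a society considers acceptable.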