Autonomous vehicles – once the stuff of a distant future – are expected to debut as early as next year, with GM and Lyft testing self-driving electric taxis in 2017.
In a study conducted by MIT, researchers ran six online surveys of 1,928 people between June and November 2015, questioning participants about the moral behavior of autonomous vehicles (AVs). When asked whether AVs should swerve to avoid hitting a group of pedestrians – even if swerving would result in the death of the car’s occupant – respondents mostly said that the greater good was more important than the life of the individual. However, none of the respondents wanted to be in the car at the time that the choice was made, making for a difficult moral quandary. When respondents were asked a variation of the question, in which a family member or child was in the car, they still replied that they would want the car to save the greatest number of people – but again, not if they or their child were in the car.
Although previous studies have shown that respondents are less likely to buy AVs that are morally pre-programmed to be utilitarian (i.e. to serve the greatest good for the greatest number of people), respondents in this study were significantly less likely to buy one when asked to imagine a situation in which they and a family member were sacrificed for the greater good. “In other words, even though participants still agreed that utilitarian AVs were the most moral, they preferred the self-protective model for themselves.”
According to the surveys, “…even though participants approve of autonomous vehicles that might sacrifice passengers to save others, respondents would prefer not to ride in such vehicles. Respondents would also not approve regulations mandating self-sacrifice, and such regulations would make them less willing to buy an autonomous vehicle.” In general, respondents were found to disapprove of any regulation that would enforce utilitarian AVs.
The report notes that the interests of the three groups that have a say in how AVs handle ethical dilemmas – consumers, manufacturers and governments – are likely not aligned.
With reports estimating that human error currently contributes to 90% of all traffic accidents, part of the appeal of AVs is in the increased safety potential of having vehicles that are programmed not to crash. Part of the programming would include algorithms that embed moral principles to guide the decisions of AVs in situations of unavoidable harm. According to the study, manufacturers and regulators would need to accomplish “…three potentially incompatible objectives: being consistent, not causing public outrage, and not discouraging buyers.”
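The utilitarian vs. self-protective distinction at the heart of the study can be sketched as a toy decision rule. This is purely a hypothetical illustration – the function, its policies, and the “swerve/stay” framing are assumptions for the sake of the example, not anything proposed by the researchers or manufacturers:

```python
def choose_action(occupants: int, pedestrians: int, policy: str) -> str:
    """Toy model of an unavoidable-harm dilemma: 'swerve' sacrifices the
    car's occupants, 'stay' harms the pedestrians in the car's path.

    policy: 'utilitarian' minimizes total casualties;
            'self-protective' always preserves the occupants.
    """
    if policy == "self-protective":
        return "stay"
    if policy == "utilitarian":
        # Swerve only when doing so results in fewer deaths overall.
        return "swerve" if occupants < pedestrians else "stay"
    raise ValueError(f"unknown policy: {policy}")

# A utilitarian AV with one occupant swerves to spare five pedestrians...
print(choose_action(1, 5, "utilitarian"))      # swerve
# ...while a self-protective AV in the same situation never does.
print(choose_action(1, 5, "self-protective"))  # stay
```

Even this trivial sketch shows why the study’s three objectives collide: the “policy” parameter is exactly the setting consumers, manufacturers, and regulators would disagree about.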
The study raised additional questions, including whether AVs should account for the ages of passengers and pedestrians, and whether buyers who knowingly choose AVs with self-protective rather than utilitarian algorithms should be held accountable in the event of an accident. These questions were not posed to respondents.
The report warned that, partly due to such ethical ambiguity, moral regulation could delay the adoption of AVs altogether. In the short term, it appears there is no way to design AV algorithms that reconcile moral values with personal self-interest – let alone account for differing moral attitudes across cultures.
According to a report by McKinsey, which predicts that AVs will be mainstream by 2040, AVs will free up to 50 minutes a day for users, improve parking-space efficiency, and reduce accidents by up to 90%, potentially saving about US $190 billion in the U.S. alone.