Driverless cars, also known as autonomous vehicles, are rapidly moving from science fiction to reality. These self-driving vehicles promise to revolutionize transportation, but they also raise complex ethical questions, particularly concerning safety. Imagine a scenario where a driverless car faces an unavoidable accident: should it prioritize the safety of its passengers or minimize harm to pedestrians, even if that means sacrificing the occupants?
A recent study, co-authored by an MIT professor, delves into this moral minefield, revealing a significant public dilemma regarding the safety programming of autonomous vehicles. The research highlights a fundamental inconsistency in public opinion: while people generally favor a utilitarian approach, where driverless cars should minimize overall casualties, they become significantly less supportive when considering their own personal safety.
The study, based on a series of surveys conducted last year, found that individuals largely endorse utilitarianism when it comes to autonomous vehicle ethics. In dangerous situations, respondents preferred driverless cars programmed to minimize the total number of injuries or deaths. For example, the majority would want a car with a single occupant to veer off course and crash to avoid hitting a group of ten pedestrians. However, this endorsement of utilitarianism clashes sharply with personal self-interest: the same respondents were strongly reluctant to actually ride in or own a driverless car programmed with such self-sacrificing logic.
This creates a stark paradox: people desire pedestrian-friendly driverless cars in theory, but prioritize maximum protection for themselves and their passengers when it comes to the vehicles they might use. As Iyad Rahwan, an associate professor at the MIT Media Lab and a co-author of the study, explains, “Most people want to live in a world where cars will minimize casualties, but everybody wants their own car to protect them at all costs.”
This inherent conflict leads to what the researchers term a “social dilemma.” If everyone prioritizes personal safety above all else when choosing or programming driverless cars, the collective outcome could be less safe for society as a whole. “If everybody does that, then we would end up in a tragedy whereby the cars will not minimize casualties,” Rahwan warns. The study poignantly concludes, “For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest” in autonomous vehicles.
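The structure of this social dilemma can be sketched as a toy model. The numbers below are purely illustrative (they are not from the study, apart from the one-occupant-versus-ten-pedestrians scenario used in the surveys): a utilitarian car minimizes total casualties at its passenger's expense, while a self-protective car does the reverse, so each individual buyer's incentive pulls against the collectively safer rule.

```python
# Toy model of the social dilemma described above.
# Illustrative numbers only; "1 vs. 10" mirrors the survey scenario.

PEDESTRIANS_AT_RISK = 10  # hypothetical group size in the dilemma event


def casualties(car_is_utilitarian: bool) -> int:
    """Total casualties in a single unavoidable-accident event."""
    if car_is_utilitarian:
        return 1  # the car sacrifices its sole occupant
    return PEDESTRIANS_AT_RISK  # the car protects its occupant


def passenger_risk(car_is_utilitarian: bool) -> int:
    """Deaths borne by the car's own passenger in that event."""
    return 1 if car_is_utilitarian else 0


# Collective outcome if every car follows the same rule:
total_if_all_utilitarian = casualties(True)   # 1 casualty per event
total_if_all_selfish = casualties(False)      # 10 casualties per event

# The dilemma: each buyer is personally safer in a self-protective car,
# yet if everyone chooses one, far more people die overall.
assert passenger_risk(False) < passenger_risk(True)
assert total_if_all_utilitarian < total_if_all_selfish
```

This is the classic tension between self-interest and collective interest that the researchers identify: no individual choice resolves it, which is why they conclude the problem cannot be solved by algorithm design alone.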
The research, titled “The social dilemma of autonomous vehicles,” was published in the prestigious journal Science. The team comprised Jean-Francois Bonnefon from the Toulouse School of Economics, Azim Shariff, a psychology professor at the University of Oregon, and Rahwan from MIT.
Inside the Surveys: Unpacking Public Sentiment
To gauge public opinion, the researchers conducted six distinct surveys between June and November 2015, utilizing the online platform Mechanical Turk. The results consistently pointed towards a public embrace of utilitarian ethics in the context of autonomous vehicles, emphasizing the importance of saving the greatest number of lives. Notably, a significant 76 percent of survey participants deemed it morally preferable for a driverless car to sacrifice its passenger to save ten pedestrians in an unavoidable accident scenario.
Despite this apparent moral consensus, the surveys also revealed little enthusiasm for personally adopting such utilitarian driverless cars. When asked to rate the morality of a self-sacrificing autonomous vehicle designed to protect pedestrians even at the cost of its occupant's life, respondents gave a positive rating that dropped by a third once they were asked to imagine themselves riding in such a vehicle.
Furthermore, the idea of government regulation mandating utilitarian programming for driverless cars met with strong resistance. Respondents indicated they would be only about one-third as likely to purchase a vehicle regulated to prioritize utilitarian principles compared to an unregulated vehicle, which could presumably be programmed to prioritize passenger safety above all else.
The researchers emphasize the critical implications of these findings for both automakers and regulatory bodies, arguing that this public conflict must be addressed proactively. If concerns surrounding these ethical dilemmas and regulations delay the widespread adoption of driverless cars, even though these vehicles are statistically safer than human-driven cars, that delay "may paradoxically increase casualties by postponing the adoption of a safer technology."
Expert Commentary: A True Social Dilemma
The study’s findings have resonated with experts in the field of ethics and psychology. Joshua Greene, a psychology professor at Harvard University, in a commentary for Science, affirms the researchers’ characterization of the situation as a “social dilemma.” He highlights that “The critical feature of a social dilemma is a tension between self-interest and collective interest,” and notes that the study effectively demonstrates the public’s “deep ambivalence about this question.”
The research team acknowledges that public opinion on autonomous vehicle ethics is still in its nascent stages. They recognize that current findings are not definitive and may evolve as driverless car technology and public discourse progress. However, Rahwan concludes, “I think it was important to not just have a theoretical discussion of this, but to actually have an empirically informed discussion.” This study serves as a crucial step in understanding the complex ethical and societal challenges posed by the advent of driverless cars, paving the way for more informed and nuanced conversations about their future.
Reference:
Bonnefon, J.-F., Shariff, A., & Rahwan, I. (2016). The social dilemma of autonomous vehicles. Science, 352(6293), 1573-1576.