Will driverless technology someday make human drivers obsolete? Would you be willing to trust your safety to an algorithm? What if you knew that the algorithm might decide to sacrifice your life to save the lives of others?
What Is It
Autonomous vehicles are quickly emerging as the next innovation that will change society in radical ways. Champions of this new technology say that driverless cars, which are programmed to obey the law and avoid collisions, will be safer than human-controlled vehicles. But how do we program these vehicles to act ethically? Should we trust computer programmers to determine the most ethical response to every possible scenario the vehicle might encounter? And who should be held responsible for the bad, potentially lethal, decisions these cars make? Our hosts take the wheel with Harvard psychologist Joshua Greene, author of "Our Driverless Dilemma: When Should Your Car be Willing to Kill You?"
Live from Cubberley Auditorium at Stanford University, Ken and Laura Maguire, Philosophy Talk's director of research, discuss a familiar topic: bad drivers. Between drinking, texting, and ordinary human error, driving is one of the most dangerous responsibilities that humans are entrusted with every day. But could the dawn of driverless cars controlled by computer algorithms change everything? Sure, computers may be safer drivers than humans on average, but can they care about human life the way that people can?
Harvard psychology professor Joshua Greene joins Ken and Laura to discuss the moral dilemmas that come with the advent of driverless cars. Josh admits that it is difficult for people to accept handing their capacity for decision-making over to a computer, but he explains that computerized driving will ultimately lead to a much safer world. Still, there is justified caution about "mechanized morality": can we trust computers to make morally fraught decisions? Josh explains that, from the perspective of neuroscience, moral decisions are just like any others, meaning that they can be programmed into computer algorithms as easily as commanding the computer to turn left or right.
In the last segment, Ken, Laura, and Josh take questions from the audience about the difficulties of mechanized morality. A lawyer points out that moral questions already have to be quantified every day in realms ranging from insurance to product design. A student points out that self-driving cars could be biased toward their own passengers, creating disparities between passengers of different socioeconomic classes. Other audience members focus on particular issues with autonomous cars, but Josh stresses that regardless of these edge cases, any sort of driverless technology will require some sort of quantified, programmed moral system. The challenge is simply deciding how that system should be set up.
- Roving Philosophical Report (Seek to 8:13): Liza Veale visits an autonomous driving research lab at Stanford University to see how researchers are dealing with the technological and ethical challenges that accompany self-driving cars. While it might be 50 years before all cars on the road are autonomous, cars are already becoming far more automated.
- Sixty-Second Philosopher (Seek to 45:55): Ian Shoales questions whether people should want driverless cars at all. He points out that the widespread adoption of driverless cars could raise all sorts of unforeseen consequences with traffic, disabilities, public transportation, and the job market for drivers.