Driverless Cars at the Moral Crossroads

Sunday, December 8, 2019
First Aired: Sunday, July 30, 2017

What Is It

Autonomous vehicles are quickly emerging as the next innovation that will change society in radical ways. Champions of this new technology say that driverless cars, which are programmed to obey the law and avoid collisions, will be safer than human-controlled vehicles. But how do we program these vehicles to act ethically? Should we trust computer programmers to determine the most ethical response to every possible scenario the vehicle might encounter? And who should be held responsible for the bad, potentially lethal, decisions these cars make? Our hosts take the wheel with Harvard psychologist Joshua Greene, author of "Our Driverless Dilemma: When Should Your Car Be Willing to Kill You?"

Recorded live at Cubberley Auditorium on the Stanford campus with support from the Symbolic Systems Program and the McCoy Center for Ethics in Society.

Listening Notes

Live from Cubberley Auditorium at Stanford University, Ken and Laura Maguire, Philosophy Talk's director of research, discuss a familiar topic: bad drivers. Between drinking, texting, and ordinary human error, driving is one of the most dangerous responsibilities humans are entrusted with every day. But could the dawn of driverless cars controlled by computer algorithms change everything? Computers may be safer drivers than humans on average, but can they care about human life the way people do?

Harvard psychology professor Joshua Greene joins Ken and Laura to discuss the moral dilemmas that come with the advent of driverless cars. Josh admits that it is difficult for people to accept handing their capacity for decision-making over to a computer, but he argues that computerized driving will ultimately lead to a much safer world. Still, there is justified caution about "mechanized morality": can we trust computers to make morally fraught decisions? Josh explains that, from the perspective of neuroscience, moral decisions are decisions like any others, which means they can be programmed into computer algorithms as readily as commanding the computer to turn left or right.
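To make that "as readily as turning left or right" claim concrete, here is a minimal, purely illustrative sketch in Python. The maneuver names, probabilities, and weights are all hypothetical, drawn neither from the episode nor from any real vehicle's software; the point is only that once harms are assigned numeric weights, a "moral" choice reduces to the same cost minimization a planner already performs for steering.

    # Hypothetical sketch: moral choice as ordinary cost minimization.
    from dataclasses import dataclass

    @dataclass
    class Maneuver:
        name: str
        p_injury_passenger: float   # estimated chance of harming the passenger
        p_injury_pedestrian: float  # estimated chance of harming a pedestrian

    # Moral weights: how bad each outcome is judged to be. Picking these
    # numbers IS the ethical decision the episode debates; a "passenger-
    # biased" car would simply set W_PEDESTRIAN lower.
    W_PASSENGER = 1.0
    W_PEDESTRIAN = 1.0

    def expected_moral_cost(m: Maneuver) -> float:
        return (W_PASSENGER * m.p_injury_passenger
                + W_PEDESTRIAN * m.p_injury_pedestrian)

    options = [
        Maneuver("brake hard", p_injury_passenger=0.10, p_injury_pedestrian=0.30),
        Maneuver("swerve left", p_injury_passenger=0.40, p_injury_pedestrian=0.05),
    ]

    best = min(options, key=expected_moral_cost)
    print(best.name)  # "brake hard": cost 0.40 beats 0.45 under equal weights

Nothing in the minimization itself is ethically special; everything contested in the episode lives in the choice of weights.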

In the last segment, Ken, Laura, and Josh take questions from the audience about the difficulties of mechanized morality. A lawyer points out that moral questions already have to be quantified every day in realms ranging from insurance to product design. A student points out that self-driving cars could be biased toward their own passengers, creating inequities between passengers of different socioeconomic classes. Other audience members raise particular problems with autonomous cars, but Josh stresses that, whatever the edge cases, any driverless technology will require some sort of quantified, programmed moral system. The challenge is simply deciding how that system should be set up.

  • Roving Philosophical Report (Seek to 8:13): Liza Veale visits an autonomous driving research lab at Stanford University to see how researchers are dealing with the technological and ethical challenges that accompany self-driving cars. While it might be 50 years before all cars on the road are autonomous, cars are already becoming far more automated.
  • Sixty-Second Philosopher (Seek to 45:55): Ian Shoales questions whether people should want driverless cars at all. He points out that the widespread adoption of driverless cars could have all sorts of unforeseen consequences for traffic, accessibility for people with disabilities, public transportation, and the job market for drivers.

Comments (2)



Gerald Fnord

Sunday, July 30, 2017 -- 10:57 AM

Driverless cars: my will be done

As a moral being and, significantly, one who privileges ratiocination over passion and reflex, I should actively _prefer_ that a car with much faster reaction times, one otherwise capable of driving much better than I can, implement my moral decisions rather than I. Otherwise I would risk a momentary weakness of body or spirit interfering with the judgements I would make were I _not_ about to crash… and, preferably, when I'm not in a misanthropic mood.


Harold G. Neuman

Thursday, December 5, 2019 -- 10:12 AM

In 2017 I was not too worried

In 2017 I was not too worried about driverless cars. Thinking about it now and looking at other road hazards (people too busy texting or telephoning to pay attention to their own safety; scooters, which have no business on the road in the first place; bicycle riders who refuse to obey traffic laws; and so on), I realize we are approaching transportation entropy: the saturation point where it becomes well nigh impossible to keep track of the growing hazards while safely operating any vehicle, driverless or driven. I'm thinking of giving up driving altogether, although it will be an inconvenience, to say the least.