When Driverless Cars Must Choose

01 August 2017

Will driverless technology someday make human drivers obsolete? Would you be willing to trust your safety to a mere algorithm? What if you knew that the algorithm might decide to sacrifice your life to save the lives of others? These are just some of the questions we discuss in this week’s episode—Driverless Cars at the Moral Crossroads.

We human beings are terrible drivers. Think about it—we text and drive, we drink and drive, we drive while half asleep. Part of me would gladly take a computer over a human behind the wheel almost any day of the week.

On the other hand, I love driving stick shifts. Every car I've ever owned, except my first, a death trap of a broken-down Ford Pinto (remember those?), has had a manual transmission. I love that feeling of being in control. And another part of me wouldn't want to give that awesome feeling up and put a computer in charge.

But although I think of myself as a good driver, and love being behind the wheel rather than in the passenger seat, other drivers simply suck. Some 94% of all accidents are caused by human error. And it's not just a few bad apples, even though 15% of drivers do cause 85% of all accidents. The real problem is that the average human driver is no cup of tea: over the course of a driving career, the average driver will cause three or four accidents, and for every actual accident there are seven or eight near misses.

The thing is, though, that at least people care. Computers don’t. They don’t care about you. They don’t care about me. They don’t even care about themselves! Fortunately, we just need to program them to drive as if humans matter to them. But that doesn’t make things any easier, really. Because now we must ask, which humans? Their own passengers? Passengers in other driverless cars? Pedestrians? Cyclists?

Of course, in the abstract, every life matters. But the question is whether they should all matter equally to a self-driving car. I don't know about you, but when I'm behind the wheel, my instinct for self-preservation kicks in. And when somebody I care about is in the passenger seat, I do my best to keep them safe.

To see where I’m going with this, imagine that you and your loved ones are passengers in a driverless car. A pedestrian suddenly jumps into its path. The car calculates that it can perform either a maneuver that would harm the pedestrian or one that would harm you. Which should it perform?

Engineers will tell you that they're going to do their best to make sure such scenarios seldom arise in the first place. The cars will constantly scan the road for signs of trouble, with far better sensors and far faster reaction times than any human. Perhaps there is some truth to that, but seldom is still not the same as never. That means we can't avoid deciding whether an autonomous car should sometimes be willing to sacrifice its passenger to save a pedestrian. It's a question that must be confronted.

How we answer that question will depend on what ethical theory we decide to program into our cars. If a car is programmed like a utilitarian, it will decide what to do by calculating the greatest good for the greatest number. This means that in some circumstances the car might decide to sacrifice its passenger to save the lives of others.
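Just to make vivid what "calculating the greatest good for the greatest number" might literally amount to, here is a purely hypothetical sketch in Python. Every name and number in it is invented for illustration; this is a toy, not anyone's actual vehicle software:

```python
# Toy sketch of a "utilitarian" maneuver selector. All names and numbers
# are hypothetical and purely illustrative.

from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    # Predicted probability of serious harm to each person affected,
    # passengers and bystanders alike.
    harm_probabilities: list[float]


def expected_total_harm(m: Maneuver) -> float:
    # A strict utilitarian sums harm over everyone, weighting all lives equally.
    return sum(m.harm_probabilities)


def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Pick whichever maneuver minimizes total expected harm, even if that
    # maneuver happens to be the worst one for the passenger.
    return min(options, key=expected_total_harm)


if __name__ == "__main__":
    swerve = Maneuver("swerve into barrier", [0.6])        # harms the passenger
    brake = Maneuver("brake in lane", [0.1, 0.8])          # harms the pedestrian
    print(choose_maneuver([swerve, brake]).name)           # -> swerve into barrier
```

Notice that nothing in this toy calculation gives the passenger any privileged status. That is exactly the feature that worries me.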

That may sound fine in theory, but I'm not sure it would work in practice. Ask yourself: would you personally trust your life to a utilitarian car, programmed to treat you as just one human among others, with no special concern for your survival?

"But don’t we already do that every time we get into a cab or take an Uber?" someone will ask. Not really. Whatever a human driver does or doesn’t feel about you personally, they’ve still got that instinct for self-preservation. If you knew up front that a cab driver had no such instinct, I’d bet you’d be a little reluctant to get in that cab.

Look, I’m not suggesting that driverless cars be programmed to be suicidal or to treat their passengers like disposable cargo. But I am suggesting that in the absence of the human instinct for self-preservation—which you cannot possibly program out of a well-functioning human, but might not want to program into a self-driving car—it's very much a non-trivial matter to decide exactly what driverless cars should be programmed to prioritize.

It's tempting to think that maybe people should be able to choose the moral theory of any driverless car they are passengers in, sort of like having the option to upgrade to an awesome sound system or get heated seats. You pay a little extra if you want the car to be a bit partial to you and yours, and pay even more if you want it to drive like a ruthless getaway car.
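To make that thought experiment concrete, here is an equally hypothetical sketch of what such a "pay for partiality" knob might amount to: the car still minimizes weighted harm, but the weight on its own passengers scales with however much partiality the owner bought. Again, every name and number is invented:

```python
# Hypothetical "partiality" setting, purely for illustration.

def weighted_harm(passenger_harms, bystander_harms, partiality=1.0):
    # partiality = 1.0 is the impartial utilitarian baseline; higher values
    # make harm to the car's own passengers count for more in its decisions.
    return sum(bystander_harms) + partiality * sum(passenger_harms)


# The impartial car prefers to swerve, harming its passenger (0.6 < 0.9)...
print(weighted_harm([0.6], []), weighted_harm([0.1], [0.8]))            # 0.6 vs 0.9
# ...but a car bought with partiality 3.0 protects its passenger instead (1.8 > 1.1).
print(weighted_harm([0.6], [], 3.0), weighted_harm([0.1], [0.8], 3.0))  # 1.8 vs 1.1
```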

Though that may sound good at first, it would obviously lead to moral chaos on the road. What we need is to reach consensus about the moral theory we program our cars to obey before we turn the roadways over to them. No doubt that will be a hotly debated question for years to come.

One thing is certain: whatever moral theory we, as a society, eventually choose for autonomous vehicles, car manufacturers will no doubt do their best to make passengers feel special and catered to. Until they read the fine print… "Warning: in case of a dire emergency, this car may decide, on moral grounds, to do harm to you, its passenger…"

Comments (4)

Harold G. Neuman

Friday, August 4, 2017 -- 12:45 PM

I, too, fancy driving and prefer my old stick-shift Subaru to modern variable-speed rigs and other forms of automatic transmission vehicles. That said, I must decry the irresponsibility of multi-tasking while behind the wheel. I think the notion of driverless cars is patently absurd, but, obviously, there are those who believe it will be the next Big Thing. I'm not terribly worried about this because it will not affect me much, no matter what happens. My grandchildren are another matter. It is not (as an attorney or judge might say) yet ripe for determination. The jury is still out.

Tim Smith

Saturday, December 7, 2019 -- 8:39 AM

This is a refractory concern of automation, artificial intelligence, and, by the time this post is deleted... artificial consciousness.

What is morality? Why does it concern human minds and not pattern-recognizing algorithms? What is so fearful about surrendering to such algorithms if the return on investment is overall public and personal safety that far exceeds the moral universe that would be its rightful ruler?

Morality does take precedence over action, not for what it is in its essence. We should be careful that such questions do not lead us down the wrong road. Driving is a very deep metaphor for thought, mind, and body, but it is incorrect on the whole. There is no homunculus driving your mind as you read this post. There is no algorithm that would decipher morality.

The only thing to fear is fear. If that is trite, then try this: freedom is not free. Just as trite, but ever so much deeper, if only libertarians understood the depth, and I have considered myself one of those.

I choose to let driverless cars do the driving, but I morally choose not to be driven. Do I do anything there? Sometimes inaction is the moral choice. Prudence is different from morality how? I would drive if it weren't yet another vector of disaster. There are so few, too few, moral vectors.

Harold G. Neuman

Wednesday, December 11, 2019 -- 10:16 AM

Artificial consciousness? Well, I suppose nothing is impossible. What the mind can conceive, the man can do... maybe. The consciousness enigma is one of my favorite puzzles, so, from time to time, I write something about it. The following is an excerpt from an essay:
...Freud, it seems to me, made a category mistake when he introduced 'unconscious' into his psychoanalytical lexicon. Appearing to grope for a term describing a nether state of consciousness, he ascribed to that word a meaning which had never before been intended. In usual speech, unconscious means an unawareness brought about by a blow to the head (trauma) or a sleep state, as induced under anesthesia. In subsequent years, philosophy professionals have spoken to this muddled notion. In his 1949 work, The Concept of Mind, Gilbert Ryle gave a subtle critique, characterizing it (unconscious) as "a Freudian idiom". Later, John Searle weighed in, saying roughly that anyone using the term unconscious when discussing consciousness did not know what they were talking about. Je ne sais quoi, indeed. None of this is to say that Freud's body of work and achievements were less than estimable. It only shows that creativity, even issuing from great minds, can be misguided... The Rooseveltian admonition about fear was never trite. The only folks who believed it was were, as they are now, unaware of the bigger picture. Ryle said it best: "The solution of a problem is not always a truth or a falsehood."

Tim Smith

Tuesday, December 17, 2019 -- 4:07 AM

Roosevelt wasn't the progenitor of that fear admonition, and it is trite if it is taken that way. Fear and valence are two very different things. The confusion makes it trite to some and a scathing false politic to others. That a bigger picture can make an admonition profound... I will have to think very much harder on that.

Popper killed Freud with his own knife of untested recursion. The unconscious lives on well past Freud's death, only to be branded non-conscious by those who would discuss it openly despite Popper's violence. Eric Kandel's most recent (last?) book gives a very fine funeral to this thought and resurrects the Freudian legacy. I don't pretend to hold a candle to Kandel. We need more books, more data, fewer blogs, less fear-mongering manipulation... I get your point. I think.