As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions.
Could robots ever have feelings that we could hurt? Should we hold them responsible for their actions? Or would that just be a way to let humans off the hook?
This week, we’re asking “Could Robots Be Persons?” It’s the third and final episode in our series, The Human and the Machine, generously sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Before answering the question of whether robots could ever be persons, we might want to ask why we would even want them to be persons in the first place. It's not that robots don't have many important uses—they're great at assembling cars in a factory, at serving as precision tools for surgeons, and things like that—but they can do all those jobs without having personalities or feelings.
Granted, AI is getting smarter and more sophisticated all the time. Current robots can already make decisions without human input; they can autonomously explore and learn from their environments. They can perform intellectual tasks we once thought were impossible: recognizing pictures, holding sophisticated conversations with us, and defeating the best human chess players.
But that’s not to say that they’re like us in any important sense. Sure, they can imitate us and our actions, but ultimately, they’re just clever machines. We have beliefs and desires, we feel pain, we can be punished for our choices. None of that is true for robots. You can’t hurt their feelings or frustrate their desires—they don’t have any to begin with.
Those are the robots we have now. Perhaps future robots will be more like us. Perhaps someday scientists will build a robot with real feelings and emotions, not just imitation ones. Whether that's genuinely possible or just the stuff of sci-fi, it seems like it would be a bad idea. Imagine your future housekeeping robot starts to hate its job, refuses to do any cleaning, and instead decides to watch TV all day! That would defeat the entire point of having a robot in the first place.
More seriously, if we built conscious machines, robots with feelings and emotions, then we’d be building something that could suffer terribly, and that seems morally wrong. Some might suggest it’s no different than having a child, which is, after all, also creating a conscious being capable of suffering. The big difference between children and robots, however, is that robots are products created by us to use. It’s fine to build products, and it’s fine to make new people, but nothing should be both a person and a product. To treat a person like a product would be cruel, dehumanizing, and unjust.
Apart from the question of suffering, creating products that have the legal or moral status of a person would mean having to hold them accountable for their actions. And how exactly would we do that? Take away their screen time? Send them to their shipping container for a time-out?
I suppose these future sentient robots we’re imagining would have beliefs and desires of their own, so if we wanted to punish them for their misdeeds, then we would have to take away things they want. But designing products that have actual desires—and that could have those desires frustrated—seems like a dangerous proposal. What if their desires ultimately involved enslaving humans?
Given how quickly AI and robotics are developing, we need to think carefully about these kinds of questions. If creating a robot with feelings and emotions is a bad idea, how do we make sure it never happens? Where do we draw the line between a really complicated artifact and an actual person? And what are the dangers of treating products like persons?
Our guest this week is Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin. In addition to her research on intelligence, human-robot interaction, and the ethics of AI, Joanna advises EU lawmakers on how to regulate digital technology and protect people from potentially harmful AI.
Join us for what will be a fascinating conversation!