Could Robots Be Persons?

Sunday, January 9, 2022

What Is It

As we approach the advent of autonomous robots, we must decide how we will determine culpability for their actions. Some propose creating a new legal category of “electronic personhood” for any sufficiently advanced robot that can learn and make decisions by itself. But do we really want to assign artificial intelligence legal—or moral—rights and responsibilities? Would it be ethical to produce and sell something with the status of a person in the first place? Does designing machines that look and act like humans lead us to misplace our empathy? Or should we be kind to robots lest we become unkind to our fellow human beings? Josh and Ray do the robot with Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance, and author of "The Artificial Intelligence of the Ethics of Artificial Intelligence: An Introductory Overview for Law and Regulation."

Part of our series The Human and the Machine.

Comments (1)


Tim Smith

Thursday, December 2, 2021 -- 1:03 PM

If robots can extend empathy, learn operantly, and are embodied, I have no issue granting them personhood. Along the way, they would also need to pay for themselves, leave no trace, and improve the lives of others. If only people were held to the same standards.

I do feel that this level of intelligence and compassion is both possible and probable, given current advances in deep learning.
