Digital Persons?

07 January 2022

Image by ClaudeAI.uk


Could robots ever have feelings that we could hurt? Should we hold them responsible for their actions? Or would that just be a way to let humans off the hook? 


This week, we’re asking “Could Robots Be Persons?” It’s the third and final episode in our series, The Human and the Machine, generously sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). 


Before answering the question of whether robots could ever be persons, we might want to ask why we would even want them to be persons in the first place. It’s not that robots don’t have many important uses—they’re great for assembling cars in a factory, serving as precision tools for surgeons, and things like that—but they can do all those jobs without having personalities or feelings. 


Granted, AI is getting smarter and more sophisticated all the time. Current robots can already make decisions without human input, and they can autonomously explore and learn from their environments. They can perform intellectual tasks we once thought were impossible for machines: recognizing pictures, holding sophisticated conversations with us, and defeating the best human chess player.  


But that’s not to say that they’re like us in any important sense. Sure, they can imitate us and our actions, but ultimately, they’re just clever machines. We have beliefs and desires, we feel pain, we can be punished for our choices. None of that is true for robots. You can’t hurt their feelings or frustrate their desires—they don’t have any to begin with.


Those are the robots we have now. Perhaps future robots will be more like us. Perhaps someday scientists will build a robot with real feelings and emotions, not just imitation ones. Whether that’s genuinely possible or just the stuff of sci-fi, it seems like it would be a bad idea. Imagine your future housekeeping robot starts to hate its job, refuses to do any cleaning, and instead decides to watch TV all day! That would defeat the entire point of having a robot in the first place.


More seriously, if we built conscious machines, robots with feelings and emotions, then we’d be building something that could suffer terribly, and that seems morally wrong. Some might suggest it’s no different from having a child, which is, after all, also creating a conscious being capable of suffering. The big difference between children and robots, however, is that robots are products, created by us for our own use. It’s fine to build products, and it’s fine to make new people, but nothing should be both a person and a product. To treat a person like a product would be cruel, dehumanizing, and unjust.


Apart from the question of suffering, creating products that have the legal or moral status of a person would mean having to hold them accountable for their actions. And how exactly would we do that? Take away their screen time? Send them to their shipping container for a time-out?


I suppose these future sentient robots we’re imagining would have beliefs and desires of their own, so if we wanted to punish them for their misdeeds, then we would have to take away things they want. But designing products that have actual desires—and that could have those desires frustrated—seems like a dangerous proposal. What if their desires ultimately involved enslaving humans?  


Given how quickly AI and robotics are developing, we need to think carefully about these kinds of questions. If creating a robot with feelings and emotions is a bad idea, how do we make sure it never happens? Where do we draw the line between a really complicated artifact and an actual person? And what are the dangers of treating products like persons?


Our guest this week is Joanna Bryson, Professor of Ethics and Technology at the Hertie School of Governance in Berlin. In addition to her research on intelligence, human-robot interaction, and the ethics of AI, Joanna advises EU lawmakers on how to regulate digital technology and protect people from potentially harmful AI. 


Join us for what will be a fascinating conversation!


Comments (4)



Harold G. Neuman

Friday, January 14, 2022 -- 3:52 AM

I enjoy speculation. So, considering the opening questions above, I offer the following counterpoints. If we are looking for sentience in AI, that at least suggests we have thought about whether robots could/would/should have feelings. It would therefore entail some awareness, on our part, that those feelings could be hurt. This further implies we would have them feel something like OUR pain. On the question of accountability, it seems to me that, as creators of AA (artificial accountability), we would have no fingers to point, or, to use the old retailer's admonition: 'you break it, you buy it'. Attempting to use one's creation to let oneself off the hook is, as your colleague might say, absurd. There are, I guess, ethical and moral questions aplenty here. But the creation(s) would not be creating them.


Tim Smith

Saturday, January 15, 2022 -- 11:30 AM

Something very odd has happened in machine learning recently. Not only have technologists been able to mimic pattern recognition and many cognitive processes, but engineers have generated code that can solve problems without anyone fully understanding how the code gets there. Scientists can check the answers, but programmers can’t trace the course of logic or tell whether reasoning was present.

If one can’t determine why something happened, I’m not sure we should attribute accountability to the person or organization that created the precursor to that something. It should fall on the algorithm that made that decision, even if that same algorithm can’t tell us why it made its choice. Up to this point tracing source algorithms has been challenging, but there is hope with innovation in blockchain programming. Attribution is the primary return on blockchain – not cryptocurrency – after all.

Contrary to the opinion of the European Union or Joanna Bryson, at a certain point (which has likely already occurred), Google, Microsoft, or any government is not to be held accountable for decisions made by their robots or AI (two very different if similar entities – a point made in the show comments).

No AI should be allowed access to decisions unless there is a demonstrated advantage in the algorithm in the first place. But this isn’t our problem. We have long since delegated decisions to machines with expert-level code to handle situations better than humans ever could. You do this whenever you put your foot on the brake pedal or turn on your phone. The issue is super-intelligent code that can’t be understood retroactively, in real time, or, as fear-mongering authors suggest, in the future. The wreckage of the future is mighty.

AI promises that it can pay its way, and the surest path for this to happen is to treat it as an adult. It will have to be responsible for its impact on others, and it will have to improve the lives of others to boot.

People make their paths, mistakes, and lives. Maybe my children will pay me back, maybe not. They will bear the scars of my parental mistakes. Leaving scars on our code isn’t going to help. The time has come to treat each batch of machine learning code, each quantum annealer, each Hadamard machine code as a legal and economic person/entity with profits and benefits for all living creatures apart from the people, projects, and corporations that created them. As much as I disagree with the EU attempt to trace liability to corporate entities, I fully endorse the special status of digital personhood. The technology is mature enough to generate profit and return without worrying about human scars and imperfections.

Is there such a thing as a digital person? You and I will never meet, and I am a digital person to you. We are differently-abled creatures. Let’s live and let live, and see what our creations can make in our image and with their abilities.


Harold G. Neuman

Sunday, March 20, 2022 -- 8:52 AM

Got an invitation to complete a detailed survey on ethics, as might be applied to 'sentient' rescue robots, healthcare 'bots, and the like. This is part of a project at Cal Fullerton. An interesting thought experiment, built around the trolley problem and others. I appreciate speculation nearly as much as knowledge. As with many such exercises, there were putatively no right or wrong answers to the scenarios. As a practical matter, however, there were choices that were more right than others, from a pragmatist view. The answers were either Yes, No, or Maybe, so depending on one's level of humanity and/or other factors, the survey would yield a diversity of results. I wish the researcher well and hope the survey is useful.


Harold G. Neuman

Thursday, August 18, 2022 -- 8:17 AM

Well, here we are---August of 2022. Controversy and discussion over AI rage onward. A new creation, Sophia or Sophie depending on which spelling is correct, is avowed to be 'actually alive'. What 'actually alive' means is not entirely clear. The bot has no heartbeat or respiration, as far as I have been able to determine. We went through the LaMDA ordeal and that was a bust---it cost one overactive thinker a job. Life in the blog world goes on. One of my blood kin has a wife named Sophia and they have a baby girl now. All are alive and well. New terms come and go in philosophy and other fields. The latest is curious: authoritarian populism. I do not know what it means yet, but am searching. If it occurs to you that authoritarianism and populism may not fit well together, I think you would be right. Somehow, someone believes authoritarian populism is OK. Maybe so. There is a MULLET competition coming up too...
