Machines might surpass humans in terms of computational intelligence, but when it comes to social intelligence, they’re not very sophisticated.
Would you like a robot to assist you with tasks around the home? What kinds of jobs would you be comfortable leaving a robot to do? Would you trust one to take care of your child or an elderly parent?
This week, it’s the first episode in our new series, The Human and the Machine, generously sponsored by the Stanford Institute for Human-Centered Artificial Intelligence (HAI). We’re kicking off the series with this episode called “The Social Lives of Robots.”
With that title, you might wonder if we’re talking about robots hanging out with other robots, going to robot dinner parties, having robot pals, and so on. But that’s not what this episode is about. We’re thinking about how to develop robots that are socially intelligent.
As robots interact more and more with humans and learn how best to assist us, they need to learn how to do things like read social cues, such as figuring out what we’re looking at and what our facial expressions mean. And they need to learn how to behave so that we feel comfortable and not creeped out interacting with them. That means that they need to develop social intelligence.
Computers and AI have incredible data processing abilities. They can do all sorts of things, like calculate pi to 1,000 digits in a fraction of a second, and let you communicate with people all across the globe with a mere click of a button. We use them to do things fast and efficiently that would be slow and cumbersome if we had to do all the work ourselves. So why do we also need them to be social?
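As an aside, the pi claim really is trivial for a modern machine. Here is a minimal sketch of how one might do it in Python's standard library, using the well-known Chudnovsky series (each term of which contributes roughly 14 digits); the function name `pi_digits` is just chosen here for illustration:

```python
from decimal import Decimal, getcontext

def pi_digits(n):
    """Compute pi to n decimal digits via the Chudnovsky series."""
    getcontext().prec = n + 10  # extra guard digits for intermediate rounding
    C = 426880 * Decimal(10005).sqrt()
    M, L, X, K = 1, 13591409, 1, 6
    S = Decimal(L)
    # Each term adds ~14 correct digits, so n // 14 + 2 terms suffice.
    for i in range(1, n // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3
        L += 545140134
        X *= -262537412640768000
        S += Decimal(M * L) / X
        K += 12
    pi = C / S
    getcontext().prec = n + 1  # "3" plus n decimal digits
    return +pi  # unary plus rounds to the current precision

# pi_digits(1000) returns in a fraction of a second on ordinary hardware.
```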
But we’re not talking about computers and AI in general—we're talking about robots in particular. And robots have a kind of body; they move around in space. So they need to be able to perceive and navigate different kinds of environments, and they need to be able to figure out how to behave themselves in those different environments. As we are fundamentally social creatures, a robot that is navigating human environments will need some social skills.
Here’s the problem, though. While robots are unlike your laptop in the sense that your laptop isn’t designed to move around and act autonomously, they’re just like laptops in the sense that they share the same kind of intelligence—computational. Their behavior is just as much a result of ones and zeros as your laptop’s. And it’s unclear how to get social intelligence out of computational intelligence.
Our social intelligence doesn’t come from manipulating numbers and computing data. We have natural resonance systems that allow us to have empathy, to perceive meaning in facial expressions, to follow each other’s eye gaze, etc. A newborn baby, with no ability to crunch numbers, still has more social intelligence than a robot. So how do we get one kind of intelligence out of a fundamentally different kind?
Clearly this is a problem for machine learning. Engineers and computer scientists have to develop algorithms that allow robots to model and mimic human behavior, and learn how to keep improving and adjusting their behavior to human needs. This seems like a very challenging problem. And even if socially intelligent robots are developed, they’re going to be very different from us. Whatever “intelligence” they have will be, after all, artificial.
You might wonder if we should really bother with this. We already have robots doing various tasks, like helping in the operating room or the assembly line. That’s what they’re really good at. Would we really want robots to be nurses and teachers too? Surely, those jobs should be left to real people!
I understand the perennial worry about robots taking away human jobs, but I think robots could be assistants to human nurses and teachers, not replacements. Consider the work of a nurse, which can often be physically demanding. They have to do things like help people sit up or get in and out of bed. Now imagine nurses having robot assistants to help with all the heavy lifting, while they do all the things where the human touch is important.
Of course, if all the robot is doing is heavy lifting, you might wonder why it needs social intelligence at all. Why not just work on designing robots that are really good at lifting people in and out of bed and forget about trying to develop their social skills?
Helping physically incapacitated people is not like building cars on an assembly line. One car is just like the last, and once the robot knows how to assemble one, it just does the same thing over and over. But people are individuals with different needs and desires. Even if all the robot is doing is helping the patient get in and out of bed, it will still need skills that the robot working in a car factory doesn’t need.
When a human nurse is helping a patient sit up and the patient winces, the nurse understands what that means: that the patient is in pain or uncomfortable. If a robot is going to take over this kind of work, it also needs to be able to tell when the patient is uncomfortable, just by looking at the patient’s facial expression. It should be able to adjust immediately to the patient’s needs, just as a real-life human nurse would.
In other words, if robots are going to be interacting with us, assisting us with various tasks, they need to be able to anticipate our needs based on things like body language and facial expression, and that’s why they need at least some social intelligence.
But robots are also being designed to do a lot more than this. The primary function of socially assistive robots is social interaction with humans, and they are being used in all sorts of interesting ways, like helping people with autism learn social behavior skills.
To talk about both the challenges and potential benefits of social robotics, our guest this week is Elaine Short, a computer scientist from Tufts who actually works on designing socially assistive robots. Hopefully Elaine will explain exactly how to get something like social intelligence from a computer algorithm!