The Social Lives of Robots

Sunday, November 14, 2021

What Is It

Machines might surpass humans in terms of computational intelligence, but when it comes to social intelligence, they’re not very sophisticated. They have difficulty reading subtle cues—like body language, eye gaze, or facial expression—that we pick up on automatically. As robots integrate more and more into human life, how will they figure out the codes for appropriate behavior in different contexts? Can social intelligence be learned via an algorithm? And how do we design socially smart robots to be of special assistance to children, older adults, and people with disabilities? Josh and Ray read the room with Elaine Short from Tufts University, co-author of more than 20 papers on human-robot interaction, including "Robot moderation of a collaborative game: Towards socially assistive robotics in group interactions."

Part of our series The Human and the Machine.

Comments (6)


Harold G. Neuman

Saturday, October 2, 2021 -- 12:19 PM

They need not have social lives, because they do not think, no matter what anyone tries to tell you. AI is a contrivance. Turing would have told you that. Actually, he did. But no one was paying attention.

Tim Smith

Thursday, October 7, 2021 -- 7:11 PM

Turing said and did many things and was ignored for various reasons, but to characterize his view of AI as a contrivance, and to claim that no one noticed, is like saying Barack Obama was a racist and Donald Trump loved his country and we all missed it. All these statements can be made, but all are equally untrue.

This is a cool little sub-thread, though. Alan Turing helped win WWII and was one of the greatest philosophers, mathematicians, and human beings of the last century… PT should do a show on him.

Harold G. Neuman

Sunday, October 3, 2021 -- 7:34 AM

I think professionals get so wrapped up in their disciplines that they are not able (willing?) to separate reality, 'how things probably are,' from fantasy, 'how they might possibly be.' Sure, I tend to think outside the box and/or jump out of the system on a few things myself. It is good mental gymnastics; it helps keep me sharper in advanced years. But when we talk of notions such as computer social skills, or the lack thereof, we are in the realm of What Does It Matter?, not how things probably are. This is more Asimov than is either practical or pragmatic. Is it useful to speculate on the improbable? I don't think so. Artificial intelligence is making a difference; this much is inarguable. Mental gymnastics are good therapy. And idle chat IS a social skill, banal as it can often be. Computers are marvelous tools, a.k.a. algorithms. I would, however, rather play chess with another human being.

Tim Smith

Thursday, October 7, 2021 -- 9:13 PM

Exoskeleton tech can get people with paraplegia mobile. Assistive robots have infinite patience for autistic kids to learn from and even play with. Eldercare robots give failing parents peace of mind. Medical rescue robots provide people with epilepsy dignity and timely care. There are many more applications for robots than these. All of these roles call for social skills and sensitivity to perform well. Philosophers must consider the limits and possibilities.

Advanced neuromorphic chips are in the pipeline to add neural-network units that process the cues that could detect a seizure, log depression, and find avenues to engage a young autistic mind. These and many other tasks can be done, and done right. It only takes a few failures for people to lose their trust in tech to address these high-need use cases.

I agree robots don't need to be social like humans, but they need to respond to human emotions and traits (some of which humans might not even perceive). The most dangerous thing a robot could do, perhaps, would be to imitate humans. Sometimes you want a robot to act and feel like a human; in general, though, robots should communicate socially to humans that they are not themselves human. If that seems odd, it won't for long. Robots are becoming more and more human. That, I agree, is a problem.

Harold G. Neuman

Wednesday, October 20, 2021 -- 5:02 PM

Tim:
I do not often look to film or popular literature for philosophy. But I was moved by I, Robot. The high-tech Audi was neat, too.

Tim Smith

Wednesday, October 20, 2021 -- 5:55 PM

Harold:
"The Imitation Game" is worthy wrt Turing.
