Well, they don’t, exactly, but here’s the deal:
From Sapiens, a journal of Anthropology/Everything Human:
Pepper is a white, semi-humanoid robot, about the size of a 6-year-old, made by Tokyo-based SoftBank Robotics. You may have seen him working in a bank or a hotel, or being interviewed by Neil deGrasse Tyson. According to the company, Pepper was designed “to be a genuine day-to-day companion whose number one quality is his ability to perceive emotions.” Pepper uses cameras and sensors to detect a person’s facial expression, tone of voice, body movements, and gaze, and the robot reacts to those—it can talk, gesture, and even dance on wheels.
How does Pepper care?
Pepper and other emotional robots are particularly designed, for example, to make eye contact and study how we gaze back at them. “This actually evokes a feeling that the machine even cares about us,” says Hirofumi Katsuno, White’s colleague at Doshisha University in Kyoto, Japan. “The machine tries to understand me or understand us.” Depending on your cultural perspective, being stared at by a machine might seem like an intrusion or it might evoke a sense of comfort, as if someone is kindly watching over you. – “The Rise of Emotional Robots” at Sapiens
Of course, that’s not true. Pepper is not a human being and is not watching over you. The cameras and sensors are responding to your metabolism and physiology, and a programmed response is generated.
Critics of the emotional robotics industry say that the view of emotions (assumed in the industry to be only six) is oversimplified and that the robots are likely to promote stereotypes.
It’s hard to know what else they could do, as they won’t be experiencing any emotions themselves.
The Sapiens article is accompanied by a podcast featuring Jennifer Robertson, a professor of anthropology and of the history of art at the University of Michigan, Ann Arbor, and Hirofumi Katsuno, an associate professor in the department of media, journalism, and communications at Doshisha University, Kyoto.
On a practical note, robots may help to bridge the elder care gap, but mainly by doing tasks that a senior can no longer do. – Denyse O’Leary, “How do robots ‘care’?” at Mind Matters Today
See also: Can machines be persons? What would the real effect of legal personhood for machines be? For some, it’s a moral issue: sociologist and futurist James Hughes considers existing rights language to be “often human-racist” and “unethical.”
Google branches out into politics. Unfortunately, the only political model it would likely know is the one-party state. The unchallenged manipulation of search engine results during elections is a new phenomenon, made possible by the domination of the internet by a few big players.
3 Replies to “How do emotional robots “care”?”
“Critics of the emotional robotics industry say that the view of emotions (assumed in the industry to be only six) is oversimplified and that the robots are likely to promote stereotypes.”
And this is different from human actors in plays and movies how? And of course politicians and other salesmen.
We react to a well done (and fake) performance by imagining that the actor really is trying to interact with us as a fellow human being who shares our feelings. So in most cases, the person/machine appealing to our emotions is merely a trained faker. Robots are also trained fakers.
My concern is that robots will (within my lifetime) get good enough to actually replace interactions with real people. We’re already socially isolated enough. Then there are the robots being designed to allow safer sex than with actual prostitutes… an interesting future awaits us…
“I can predict the future by assuming that money and male hormones are the driving forces for new technology. Therefore, when virtual reality gets cheaper than dating, society is doomed.”
— Dogbert, October 14, 1994