How do emotional robots “care”?
September 11, 2018 | Posted by News under Artificial Intelligence, Culture, Mind
Well, they don’t, exactly, but here’s the deal:
From Sapiens, a journal of Anthropology/Everything Human:
Pepper is a white, semi-humanoid robot, about the size of a 6-year-old, made by Tokyo-based SoftBank Robotics. You may have seen him working in a bank or a hotel, or being interviewed by Neil deGrasse Tyson. According to the company, Pepper was designed “to be a genuine day-to-day companion whose number one quality is his ability to perceive emotions.” Pepper uses cameras and sensors to detect a person’s facial expression, tone of voice, body movements, and gaze, and the robot reacts to those—it can talk, gesture, and even dance on wheels.
How does Pepper care?
Pepper and other emotional robots are particularly designed, for example, to make eye contact and study how we gaze back at them. “This actually evokes a feeling that the machine even cares about us,” says Hirofumi Katsuno, White’s colleague at Doshisha University in Kyoto, Japan. “The machine tries to understand me or understand us.” Depending on your cultural perspective, being stared at by a machine might seem like an intrusion or it might evoke a sense of comfort, as if someone is kindly watching over you.
“The Rise of Emotional Robots” at Sapiens
Of course, that’s not true. Pepper is not a human being and is not watching over you. Cameras and sensors respond to your physiology and behavior, and a programmed response is generated.
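To make the point concrete, here is a minimal sketch of the kind of pipeline the article describes: sensor readings in, a classified emotion out, then a pre-scripted reply. This is not SoftBank's actual software; every feature name, threshold, and response string below is hypothetical.

```python
# Toy illustration of a sense -> classify -> respond loop.
# All feature names, thresholds, and replies are invented for this sketch.

# The six "basic emotions" the industry model is said to assume.
BASIC_EMOTIONS = ["happiness", "sadness", "anger", "fear", "surprise", "disgust"]

# Canned replies keyed by detected emotion -- the "programmed response".
RESPONSES = {
    "happiness": "You seem cheerful today!",
    "sadness": "You look a little down. Can I help?",
    "anger": "I am sorry. Let me try again.",
    "fear": "It is all right. I am here.",
    "surprise": "Oh! Did something unexpected happen?",
    "disgust": "I will change the subject.",
}

def classify_emotion(features: dict) -> str:
    """Rule-based stand-in for a classifier over sensor features.

    `features` holds hypothetical normalized readings, e.g.
    {"smile": 0.9, "brow_furrow": 0.1, "voice_pitch": 1.0}.
    """
    if features.get("smile", 0.0) > 0.5:
        return "happiness"
    if features.get("brow_furrow", 0.0) > 0.5:
        return "anger"
    if features.get("gaze_down", False) and features.get("voice_pitch", 1.0) < 0.8:
        return "sadness"
    # No rule fired: default to a neutral-ish category.
    return "surprise"

def respond(features: dict) -> str:
    """Look up the scripted reply for whatever emotion was detected."""
    return RESPONSES[classify_emotion(features)]
```

However lifelike the output feels, there is no experience behind it: the "care" is a table lookup triggered by threshold checks.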
Critics of the emotional robotics industry say that its view of emotions (assumed in the industry to number only six basic types) is oversimplified and that the robots are likely to promote stereotypes.
It’s hard to know what else they could do, as they won’t be experiencing any emotions themselves.
The Sapiens article is accompanied by a podcast featuring Jennifer Robertson, a professor of anthropology and of the history of art at the University of Michigan, Ann Arbor, and Hirofumi Katsuno, an associate professor in the department of media, journalism, and communications at Doshisha University, Kyoto.
See also: Can machines be persons? What would the real effect of legal personhood for machines be? For some, it’s a moral issue: sociologist and futurist James Hughes considers existing rights language to be “often human-racist” and “unethical.”
Google branches out into politics. Unfortunately, the only political model it would likely know is the one-party state. The unchallenged manipulation of search engine results during elections is a new phenomenon made possible by the domination of the internet by a few big players.