Sunday, August 17, 2014

A Need for Robot Etiquette

Hey, wise-guy and show-off robots, news flash: Humans would like you a lot more if you didn’t act like you knew everything all the time. A study conducted by Cornell and Carnegie Mellon scientists found that people respond better to robots that act less authoritative and use meaningless filler words to soften their communication. “People use these strategies even when they know exactly what to do,” Susan Fussell, associate professor of communication at Cornell University, wrote in a statement. “It comes off as more polite.”

Polite Robot Overlords Will Be More Persuasive 

Baking cupcakes can be as much a matter of social interaction as it is a mechanical exercise. Never is this more true than when your kitchen partner is a robot. A robot’s always-right, ego-deflating advice can be off-putting, report social psychologist Sara Kiesler and her colleagues from Carnegie Mellon University, in Pittsburgh. But having robots employ a different type of rhetoric could help soften the blow.

In one study, Kiesler’s former student Cristen Torrey, now at Adobe, observed how expert bakers shared advice with less-experienced volunteers. She recorded the interactions and extracted a few different approaches the experts used. For instance, “likable people equivocate when they are giving help,” Kiesler says. That is, they say things such as “Maybe you can try X” rather than simply “Do X.” They also soften their advice with extraneous words such as “Well, so, you can try X.”
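To make the two strategies concrete, here is a minimal Python sketch of how a robot’s dialogue system might soften a direct command with a hedge and a discourse-marker filler. The phrase lists and function names are illustrative assumptions for this sketch, not material from Torrey’s study:

```python
import random

# Two softening strategies described above, applied to a bare verb phrase
# such as "use less frosting". Phrase lists and function names are
# illustrative assumptions, not material from the study itself.
HEDGES = ["maybe you can", "perhaps you could", "you might want to"]
FILLERS = ["Well,", "So,", "Well, so,"]

def direct(advice: str) -> str:
    """Blunt, authoritative phrasing: 'Do X.'"""
    return advice[0].upper() + advice[1:] + "."

def softened(advice: str) -> str:
    """Equivocal phrasing: 'Well, maybe you can X.'"""
    return f"{random.choice(FILLERS)} {random.choice(HEDGES)} {advice}."

print(direct("use less frosting"))    # Use less frosting.
print(softened("use less frosting"))  # e.g. Well, maybe you can use less frosting.
```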

So Torrey filmed a few of her own scenarios in which either robots or people shared advice with actors pretending to learn how to bake, using various combinations of the experts’ language. Then she asked a new group of volunteers to watch the videos and rate how likable, controlling, and competent the advice givers were. The ratings showed that equivocation, or hedging, made the advisors appear more competent, less controlling, and more likable. The effect was even stronger for the robots, suggesting that people find robots less threatening than humans when the robots use humanlike language. Kiesler presented some of these results on 4 March at the ACM/IEEE International Conference on Human-Robot Interaction, in Tokyo.

“I think this is quite important, and most people who come maybe from a more engineering perspective, or computer science or technical perspective, are not paying attention to that,” says computer scientist Kerstin Dautenhahn of the University of Hertfordshire, in England. Dautenhahn works with experimental companion robots designed to help older people with daily tasks. “People don’t normally ask the question, ‘What can we do to make elderly people want to be helped?’ ” she says.

Part of why the volunteers accepted the robots’ advice might be that they were less worried about incurring an obligation or harming their own reputations when interacting with nonhuman helpers. That’s not to say that people can’t build up a relationship with robots over time, one characterized by increasingly sophisticated expectations. In fact, another of Kiesler’s students, Min Kyung Lee, found that they do. She ran a study with a robot called Snackbot: a 1.4-meter-tall, white, orange-trimmed robot that rolled around an office with a tray of goodies, speaking with office workers and offering them snacks.

Her study sought to understand what factors help a robot build a rapport with humans. Just as in human relationships, she found that variety helped. With half the workers, Snackbot referred to previous encounters, building up a shared social history. Three-quarters of the human participants reported that they liked the pseudosocial interactions. Lee calls that result, which she presented at the same conference last year, her most exciting, since other human-robot interaction studies have found that most people grow bored with robots that repeat themselves. The research was supported by the National Science Foundation.
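As a rough illustration of the shared-social-history condition, here is a minimal Python sketch in which the robot logs each encounter and refers back to the most recent one in its next greeting. The data structure and phrasing are assumptions for this sketch, not Snackbot’s actual dialogue code:

```python
import datetime

# Each encounter is logged per worker; on the next visit the robot refers
# back to the most recent one. Data structure and phrasing are assumptions
# for this sketch, not Snackbot's actual dialogue code.
history: dict[str, list[dict]] = {}

def record_encounter(worker: str, snack: str) -> None:
    """Log what a worker took, and when, for future reference."""
    history.setdefault(worker, []).append(
        {"snack": snack, "when": datetime.date.today()}
    )

def greeting(worker: str) -> str:
    """Reference the last encounter if there is one; otherwise start fresh."""
    past = history.get(worker)
    if not past:
        return "Hi! Would you like a snack?"
    return f"Welcome back! Last time you took the {past[-1]['snack']}. Same again?"

record_encounter("alice", "granola bar")
print(greeting("alice"))  # Welcome back! Last time you took the granola bar. Same again?
print(greeting("bob"))    # Hi! Would you like a snack?
```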

That’s useful information for coaches, dieticians, and designers of future robotic advisors, such as Autom. Such robots will need to learn to listen well, for starters. At first, Snackbot’s real-world voice recognition was so bad that the researchers ended up piping its audio to a remote operator, who selected responses from a prerecorded menu of statements. But today artificial intelligence is good enough to allow a robot to interact more independently with humans, says Dautenhahn. Those robots will need to borrow humans’ tricks for cultivating relationships and being polite. Kiesler says her next step is to personalize how a robot interacts with each individual, much as an intern might learn to behave differently around a cheerful mentor or a grumpy boss.
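That remote-operator workaround is the classic “Wizard of Oz” setup in human-robot interaction research. A minimal sketch of the pattern, assuming a simple keyed menu of prerecorded clips (the menu entries, file names, and function names are hypothetical, not Snackbot’s actual response set):

```python
# The robot streams what it hears to a hidden human operator, who answers
# by choosing a prerecorded clip from a menu. Menu entries, file names, and
# function names are hypothetical, not Snackbot's actual response set.
CANNED_RESPONSES = {
    "1": "greeting.wav",   # "Hi! Would you like a snack today?"
    "2": "offer.wav",      # "I have cookies and granola bars on my tray."
    "3": "goodbye.wav",    # "Enjoy! See you tomorrow."
}

def operator_console() -> str:
    """Show the menu to the remote operator and return the chosen clip."""
    print("Operator menu:")
    for key, clip in CANNED_RESPONSES.items():
        print(f"  [{key}] {clip}")
    choice = input("Select response: ").strip()
    return CANNED_RESPONSES.get(choice, "fallback.wav")

def robot_speak(clip: str) -> None:
    """Stand-in for playing the clip through the robot's speaker."""
    print(f"(robot plays {clip})")

if __name__ == "__main__":
    robot_speak(operator_console())
```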

Another important step will be to train people to have realistic expectations for their robots. Kiesler says, “It’s always been surprising to me how people assume when a robot speaks that it also thinks.”

— Written by Lucas Laursen for IEEE Spectrum, August 2014


Submitted with permission by Site Contributor, Demita Usher, of Social Graces and Savoir Faire

🍽️Etiquette Enthusiast, Maura J. Graber, is the Site Editor for the Etiquipedia© Etiquette Encyclopedia


