Robots can’t fully regain human trust after 3 mistakes

After making mistakes, robots need to be sure that they have mastered a new task before attempting to repair a human's trust in them, says Connor Esterwood. "If not, they risk losing a human's trust in them in a way that cannot be recovered." (Credit: Pierre Metivier/Flickr)

Humans are less forgiving of robots after they make multiple mistakes—and the trust is difficult to get back, according to a new study.

Like human coworkers, robots can make mistakes that violate a human’s trust in them. When mistakes happen, people come to see the robot as less trustworthy, and their trust in it ultimately declines.

The study examines four strategies that might repair and mitigate the negative impact of these trust violations: apologies, denials, explanations, and promises of trustworthiness.

The researchers conducted an experiment in which 240 participants worked with a robot coworker to accomplish a task. At times the robot made mistakes, violating the participant’s trust, and then offered one of the repair strategies.

Results indicated that after three mistakes, none of the repair strategies fully restored trustworthiness.

“By the third violation, strategies used by the robot to fully repair the mistrust never materialized,” says Connor Esterwood, a researcher at the University of Michigan School of Information and lead author of the study in Computers in Human Behavior.

Esterwood and coauthor Lionel Robert, professor of information, also note that this research introduces theories of forgiving, forgetting, informing, and misinforming.

The results have two implications, Esterwood says. First, researchers must develop more effective repair strategies to help robots rebuild trust after mistakes. Second, robots need to be sure that they have mastered a new task before attempting to repair a human’s trust in them.

“If not, they risk losing a human’s trust in them in a way that cannot be recovered,” Esterwood says.

What do the findings mean for human-human trust repair? Trust is never fully repaired by apologies, denials, explanations, or promises, the researchers say.

“Our study’s results indicate that after three violations and repairs, trust cannot be fully restored, thus supporting the adage ‘three strikes and you’re out,’” Robert says. “In doing so, it presents a possible limit that may exist regarding when trust can be fully restored.”

Even if a robot can learn from a mistake and do better afterward, it may not be given the opportunity to do so, Esterwood says. As a result, the benefits robots offer are lost.

Robert notes that people may attempt to work around or bypass the robot, reducing their own performance. This could create performance problems that, in turn, could lead to workers being fired for lack of performance, compliance, or both, he says.

Source: University of Michigan