Trust is crucial in shaping human interactions with one another and with robots. We are currently exploring how humans develop trust in robots and how predictive models of trust can enable better human-robot collaboration. Below, you’ll find some of our recently published papers on understanding how humans trust robots in uncertain scenarios, computational trust modeling, and using trust models in robot decision-making.

Multi-Task Trust Transfer in Human-Robot Interaction

[Figure: trust transfer task domains]

This work examines how human trust in robot capabilities transfers across multiple tasks. We first present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers.

The findings expand our understanding of trust and inspire new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two.

Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human-robot interaction in multi-task settings.
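To make the modeling idea concrete, the sketch below (in PyTorch) shows a minimal, illustrative trust-transfer model in the spirit of the neural variant: tasks are embedded in a latent space, a recurrent trust state is updated from observed robot performance, and trust on an unseen task is read out from the trust state together with that task's embedding. The class, architecture, and parameter names here are our own illustrative assumptions, not the exact formulation from the paper.

import torch
import torch.nn as nn

class TrustTransferNet(nn.Module):
    def __init__(self, num_tasks: int, emb_dim: int = 16):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, emb_dim)    # latent task representation
        self.update = nn.GRUCell(emb_dim + 1, emb_dim)      # trust-state update from (task, outcome)
        self.readout = nn.Sequential(
            nn.Linear(2 * emb_dim, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, obs_tasks, obs_outcomes, query_task):
        # obs_tasks: (T,) observed task ids; obs_outcomes: (T,) success in {0, 1};
        # query_task: scalar id of the task to predict trust for. Returns trust in [0, 1].
        h = torch.zeros(1, self.task_emb.embedding_dim)      # latent trust state
        for t, o in zip(obs_tasks, obs_outcomes):
            x = torch.cat([self.task_emb(t.view(1)), o.view(1, 1).float()], dim=-1)
            h = self.update(x, h)                            # revise trust after each observation
        q = self.task_emb(query_task.view(1))
        return torch.sigmoid(self.readout(torch.cat([h, q], dim=-1)))

# Example: after seeing the robot succeed at task 0 and fail at task 1,
# predict trust on the unseen task 2.
model = TrustTransferNet(num_tasks=3)
predicted_trust = model(torch.tensor([0, 1]), torch.tensor([1, 0]), torch.tensor(2))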

Further Reading: 

Multi-Task Trust Transfer for Human-Robot Interaction, Harold Soh, Yaqi Xie, Min Chen, and David Hsu. International Journal of Robotics Research (IJRR), 2019. (Impact Factor: 6.134)
[ Pre-print PDF | IJRR Link ]

The Transfer of Human Trust in Robot Capabilities across Tasks, Harold Soh, Pan Shu, Min Chen, and David Hsu. Robotics: Science and Systems (RSS), 2018. (Finalist for Best Paper Award)
[ Paper PDF | Github ]

Robot Capability and Intention in Trust-based Decisions across Tasks

[Figure: UAV simulation snapshot]

In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots (inferred capability and intention) and their relationship to overall trust and eventual decisions.

In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
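As a purely illustrative sketch of what integrating these factors could look like computationally, consider a simple logistic model of the delegation decision. The functional form, feature weights, and numbers below are hypothetical assumptions for illustration, not fitted values from the study.

import numpy as np

def delegation_probability(capability, intention, trust,
                           weights=(2.0, 1.5, 1.0), bias=-2.5):
    # All inputs are self-reported ratings rescaled to [0, 1]; the weights are made up.
    logit = bias + weights[0] * capability + weights[1] * intention + weights[2] * trust
    return 1.0 / (1.0 + np.exp(-logit))   # probability that the participant delegates

# Example: with the same overall trust, lower perceived intention
# yields a lower probability of delegating the task to the robot.
print(delegation_probability(capability=0.9, intention=0.2, trust=0.6))   # ~0.55
print(delegation_probability(capability=0.9, intention=0.9, trust=0.6))   # ~0.78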

Further Reading:

Robot Capability and Intention in Trust-based Decisions across Tasks, Yaqi Xie, Indu Prasad, Desmond Ong, David Hsu, and Harold Soh. ACM/IEEE Conference on Human-Robot Interaction (HRI), 2019. (24% Acceptance Rate)
[ Pre-print PDF ]

Robot Planning with Human Trust Models

[Figure: table-clearing task]

This paper introduces a computational model that integrates trust into robot decision-making.

Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
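For intuition, the sketch below shows the kind of Bayes-filter belief update over a latent trust variable that a trust-POMDP relies on: the robot maintains a belief over discrete trust levels and updates it from its own task outcomes and the human's observed responses. The trust levels, transition behavior, and observation probabilities here are illustrative assumptions, not the learned quantities from the paper.

import numpy as np

TRUST_LEVELS = np.array([0, 1, 2])            # low, medium, high (illustrative)

def observation_prob(intervene: bool, trust_level: int) -> float:
    # P(human intervenes | trust level): lower trust -> more interventions (assumed values).
    p_intervene = [0.8, 0.4, 0.1][trust_level]
    return p_intervene if intervene else 1.0 - p_intervene

def trust_transition(belief: np.ndarray, robot_succeeded: bool) -> np.ndarray:
    # Success nudges the belief toward higher trust, failure toward lower.
    shift = 1 if robot_succeeded else -1
    new_belief = np.zeros_like(belief)
    for i, p in enumerate(belief):
        j = int(np.clip(i + shift, 0, len(belief) - 1))
        new_belief[j] += 0.7 * p              # most probability mass moves one level
        new_belief[i] += 0.3 * p              # some mass stays put
    return new_belief

def belief_update(belief, robot_succeeded, human_intervened):
    # One step of the Bayes filter: predict (transition), then correct (observation).
    predicted = trust_transition(belief, robot_succeeded)
    likelihood = np.array([observation_prob(human_intervened, t) for t in TRUST_LEVELS])
    posterior = likelihood * predicted
    return posterior / posterior.sum()

# Example: start uncertain, then the robot succeeds and the human does not intervene.
b = np.array([1 / 3, 1 / 3, 1 / 3])
b = belief_update(b, robot_succeeded=True, human_intervened=False)
print(b)   # belief shifts toward higher trust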

Further Reading:

Planning with Trust for Human-Robot Collaboration, Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. ACM/IEEE Conference on Human-Robot Interaction (HRI), 2018. (23% Acceptance Rate, Finalist for Best Paper Award)
[ Pre-print PDF ]