Trust shapes how humans interact with one another and with robots. We are exploring how humans develop trust in robots and how predictive models of trust can enable better human-robot collaboration.

The Transfer of Human Trust in Robot Capabilities

In this work, we investigate how human trust in robot capabilities transfers across tasks. We present a human-subjects study of two task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. Our findings lead to a functional view of trust and two novel predictive models—a recurrent neural network architecture and a Bayesian Gaussian process—that capture trust evolution and transfer via latent task representations. Experiments show that the two proposed models outperform existing approaches when predicting trust across unseen tasks and participants. These results indicate that (i) a task-dependent functional trust model captures human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-based robot decision-making for fluent human-robot interaction. In particular, our models can be used to derive robot policies that mitigate under-trust or over-trust by human teammates in collaborative multi-task settings.
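To make the idea of trust transfer via task representations concrete, here is a minimal sketch. It is not the models from the paper (which learn latent task representations from data); instead, it uses hand-crafted, hypothetical task features and a simple Gaussian-process regressor to predict trust on an unseen task from trust ratings observed on related tasks.

```python
# Minimal sketch (assumptions, not the paper's exact model): Gaussian-process
# regression over hand-crafted task feature vectors, illustrating how trust
# ratings observed on some tasks can be used to predict trust on an unseen task.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between rows of A and rows of B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=0.05):
    """Posterior mean and variance of a GP conditioned on (X_train, y_train)."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mean, np.diag(cov)

# Hypothetical task features, e.g. [object fragility, grasp difficulty, clutter].
tasks = {
    "pick_can":      np.array([0.1, 0.2, 0.3]),
    "pick_glass":    np.array([0.9, 0.6, 0.3]),
    "navigate_hall": np.array([0.0, 0.1, 0.5]),
}
# Observed trust ratings (0 = no trust, 1 = full trust) after demonstrations.
X_obs = np.stack([tasks["pick_can"], tasks["navigate_hall"]])
y_obs = np.array([0.8, 0.9])

# Predict trust on the unseen, more fragile task.
X_new = tasks["pick_glass"][None, :]
mean, var = gp_predict(X_obs, y_obs, X_new)
print(f"Predicted trust on pick_glass: {mean[0]:.2f} (variance {var[0]:.3f})")
```

The key point mirrored here is that trust predicted for a new task depends on how similar that task is, in the feature (or latent) space, to tasks the human has already seen the robot perform.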

References:

The Transfer of Human Trust in Robot Capabilities across Tasks, Harold Soh, Pan Shu, Min Chen, and David Hsu. Robotics: Science and Systems (RSS), 2018. (Finalist for Best Paper Award)
[ Paper PDF | GitHub ]

Robot Planning with Human Trust Models

This paper introduces a computational model that integrates trust into robot decision-making. Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subjects experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
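As a rough illustration of the belief tracking a trust-POMDP performs, the sketch below maintains a belief over a discrete latent trust level and updates it by Bayes' rule after each robot action and observed human response. The trust levels, transition matrices, and observation probabilities are made-up illustrative values, not those learned in the paper.

```python
# Minimal sketch (assumptions, not the paper's exact model): human trust as a
# discrete latent state of a POMDP. The robot keeps a belief over trust levels
# and updates it with Bayes' rule after observing whether the human intervenes.
import numpy as np

TRUST_LEVELS = ["low", "medium", "high"]

# P(trust' | trust, robot succeeded): success nudges trust upward, failure downward.
T_SUCCESS = np.array([[0.6, 0.4, 0.0],
                      [0.0, 0.6, 0.4],
                      [0.0, 0.1, 0.9]])
T_FAILURE = np.array([[0.9, 0.1, 0.0],
                      [0.5, 0.5, 0.0],
                      [0.1, 0.5, 0.4]])

# P(human intervenes | trust level): low trust -> frequent interventions.
P_INTERVENE = np.array([0.8, 0.4, 0.1])

def update_belief(belief, robot_succeeded, human_intervened):
    """One POMDP belief update over the latent trust state."""
    T = T_SUCCESS if robot_succeeded else T_FAILURE
    predicted = belief @ T                       # predict step
    obs_lik = P_INTERVENE if human_intervened else 1.0 - P_INTERVENE
    posterior = predicted * obs_lik              # correct step
    return posterior / posterior.sum()

# Start uncertain about the teammate's trust.
belief = np.array([1 / 3, 1 / 3, 1 / 3])
# Robot clears an object successfully and the human does not intervene.
belief = update_belief(belief, robot_succeeded=True, human_intervened=False)
print(dict(zip(TRUST_LEVELS, belief.round(3))))
```

In the full trust-POMDP, a planner chooses robot actions against this belief so as to maximize long-term team performance, rather than simply maximizing the inferred trust level itself.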

References:

Planning with Trust for Human-Robot Collaboration, Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa. ACM/IEEE Conference on Human-Robot Interaction (HRI), 2018. (23% Acceptance Rate, Finalist for Best Paper Award)
[ Paper on arXiv ]