Trust is crucial in shaping human interactions with one another and with robots. We are currently exploring how humans develop trust in robots and how we can use predictive models of trust to enable better human-robot collaboration. Below, you’ll find some of our recent published papers on understanding how humans trust robots in uncertain scenarios, computational trust modeling, and using trust models in robot decision-making.
Robot Capability and Intention in Trust-based Decisions across Tasks
In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots (inferred capability and intention) and their relationship to overall trust and eventual decisions.
In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
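As a toy illustration of the finding that delegation decisions integrate capability, intention, and overall trust (rather than relying on trust alone), the sketch below combines the three estimates in a simple logistic model. The weights and bias are illustrative assumptions, not values fitted to the study data.

```python
import math

def delegation_probability(capability, intention, trust,
                           w=(2.0, 1.5, 1.0), bias=-2.5):
    """Toy logistic model of the decision to delegate a task to a robot.

    Combines perceived capability, perceived intention, and overall
    self-reported trust (each in [0, 1]). Weights are illustrative only.
    """
    z = w[0] * capability + w[1] * intention + w[2] * trust + bias
    return 1.0 / (1.0 + math.exp(-z))

# With high perceived capability and benign intention, delegation is
# likely even when self-reported overall trust is only moderate.
p_high = delegation_probability(capability=0.9, intention=0.8, trust=0.5)
p_low = delegation_probability(capability=0.1, intention=0.1, trust=0.5)
```

The point of the sketch is structural: holding overall trust fixed, changing the capability and intention estimates changes the delegation decision, mirroring the study's observation that overall trust alone does not determine delegation.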
Further Reading: Robot Capability and Intention in Trust-based Decisions across Tasks, Yaqi Xie, Indu Prasad, Desmond Ong, David Hsu, and Harold Soh, ACM/IEEE Conference on Human-Robot Interaction (HRI), 2019. (24% Acceptance Rate)
[ Pre-print PDF ]
The Transfer of Human Trust in Robot Capabilities
In this work, we investigate how human trust in robot capabilities transfers across tasks. We present a human-subjects study of two task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers.
Our findings lead to a functional view of trust and two novel predictive models—a recurrent neural network architecture and a Bayesian Gaussian process—that capture trust evolution and transfer via latent task representations. Experiments show that the two proposed models outperform existing approaches when predicting trust across unseen tasks and participants. These results indicate that (i) a task-dependent functional trust model captures human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-based robot decision-making for fluent human-robot interaction. In particular, our models can be used to derive robot policies that mitigate under-trust or over-trust by human teammates in collaborative multi-task settings.
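To give a concrete flavor of the Bayesian Gaussian process idea, the sketch below predicts trust on an unseen task from trust observed on related tasks, using a squared-exponential kernel over latent task embeddings. The embeddings, kernel hyperparameters, and trust values here are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel over task feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict_trust(X_train, y_train, X_test, noise=1e-2):
    """GP posterior mean/variance of trust on unseen tasks,
    conditioned on trust observed for related tasks."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Hypothetical 2-D latent task embeddings and observed trust in [0, 1].
X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y_train = np.array([0.9, 0.7, 0.4])
X_test = np.array([[0.5, 0.5]])  # an unseen task "between" the others

mean, var = gp_predict_trust(X_train, y_train, X_test)
```

Because the kernel measures task similarity in the latent space, trust observed on nearby tasks transfers to the unseen task, and the posterior variance quantifies how confident that transfer is.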
Further Reading: The Transfer of Human Trust in Robot Capabilities across Tasks, Harold Soh, Pan Shu, Min Chen, and David Hsu, Robotics: Science and Systems (RSS), 2018. (Finalist for Best Paper Award)
[ Paper PDF | GitHub ]
Robot Planning with Human Trust Models
This paper introduces a computational model that integrates trust into robot decision-making.
Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behavior, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance on this task. They further suggest that maximizing trust in itself may not improve team performance.
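A minimal sketch of the belief-tracking step at the core of a trust-POMDP: the robot maintains a belief over discrete latent trust levels and updates it by Bayes' rule after observing the human's response (here, whether the human intervenes and takes over). The trust levels and observation probabilities below are illustrative assumptions, not parameters learned in the paper.

```python
import numpy as np

# Discrete latent trust levels (low, medium, high) with a uniform prior.
belief = np.array([1 / 3, 1 / 3, 1 / 3])

# Hypothetical observation model: probability the human intervenes
# (takes over the task) given each latent trust level.
P_INTERVENE = np.array([0.8, 0.4, 0.1])

def update_belief(belief, intervened):
    """Bayes update of the trust belief after observing the human's response."""
    likelihood = P_INTERVENE if intervened else 1.0 - P_INTERVENE
    posterior = likelihood * belief
    return posterior / posterior.sum()

# If the human lets the robot act (no intervention), the belief
# shifts toward the higher trust levels.
belief = update_belief(belief, intervened=False)
```

In the full trust-POMDP, this belief feeds into planning: the robot chooses actions by reasoning over how each action will shift the human's latent trust and, in turn, the human's future behavior and team performance.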
Further Reading: Planning with Trust for Human-Robot Collaboration, Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa, ACM/IEEE Conference on Human-Robot Interaction (HRI), 2018. (23% Acceptance Rate, Finalist for Best Paper Award)
[ Paper on arXiv ]