Trust is crucial in shaping human interactions with one another and with robots. We are currently exploring how humans develop trust in robots and how we can use predictive models of trust to enable better human-robot collaboration. Below, you’ll find some of our recently published papers on understanding how humans trust robots in uncertain scenarios, computational trust modeling, and using trust models in robot decision-making.

NEW! Check out our 2020 review on Trust in Robots here!

Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration

In our experiments, the human and the Fetch robot have to work together to complete a shopping task. However, only the human knows the goal shopping list and is unable to communicate it directly to the robot. Moreover, both agents have imperfect capabilities and are unable to pick up certain items. Using our TICC-POMDP (and associated TICC-MCP solver), the agents undergo intent-capability calibration and update their beliefs about each other’s capabilities over time. Experiments show that our approach leads to more accurate beliefs over intention and capabilities and, in turn, higher task rewards and trust.

In this work, we consider assistive scenarios where the robot is helping the human to accomplish a particular goal, but is unaware of the human’s intent. As such, the robot has to learn the goal through interaction. Unlike the majority of existing work, we address the case where the human and robot have asymmetric capabilities that are unknown to one another. This setting is important as non-expert users may be unaware of the robot’s programming and physical capabilities. We take a decision-theoretic approach to the problem and contribute a novel POMDP and online solver. Our human-subject experiments show that our approach earned higher rewards, and we find evidence suggesting that our method induced higher levels of trust in the robot.
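
To make the core idea concrete, here is a minimal sketch of the kind of Bayesian capability update such a model relies on: the agent maintains a belief over its partner's per-item pick-up ability and refines it from observed successes and failures. This is an illustrative Python sketch, not the TICC-POMDP implementation; the item names, grid prior, and observations are hypothetical.

    import numpy as np

    # Hypothetical items from a shopping task; "capability" is the unknown
    # probability that the partner can successfully pick up each item.
    capability_grid = np.linspace(0.05, 0.95, 10)
    belief = {item: np.full(len(capability_grid), 1.0 / len(capability_grid))
              for item in ["can", "bottle", "apple"]}

    def update_belief(item, success):
        # Bayes rule: posterior is proportional to likelihood times prior.
        likelihood = capability_grid if success else 1.0 - capability_grid
        belief[item] = belief[item] * likelihood
        belief[item] /= belief[item].sum()

    def expected_capability(item):
        return float(np.dot(capability_grid, belief[item]))

    # Two failed pick-ups of "bottle" push its expected capability below the
    # 0.5 prior mean; one success on "can" pushes its estimate above it.
    update_belief("bottle", success=False)
    update_belief("bottle", success=False)
    update_belief("can", success=True)
    print(expected_capability("bottle"), expected_capability("can"))

The full model couples beliefs like these with a belief over the human's intent, and plans over both.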

Further Reading:

Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration
Joshua Lee, Jeffrey Fong, Bing Cai Kok, and Harold Soh
[ Pre-print PDF ] BibTeX:

    @inproceedings{lee2020getting,
        title     = {Getting to Know One Another: Calibrating Intent, Capabilities and Trust for Human-Robot Collaboration},
        author    = {Joshua Lee and Jeffrey Fong and Bing Cai Kok and Harold Soh},
        booktitle = {{IEEE/RSJ International Conference on Intelligent Robots and Systems}},
        year      = {2020},
        month     = {October}}

Multi-Task Trust Transfer in Human-Robot Interaction


This work examines how human trust in robot capabilities transfers across multiple tasks. We first present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers.

The findings expand our understanding of trust and inspire new differentiable models of trust evolution and transfer via latent task representations: (i) a rational Bayes model, (ii) a data-driven neural network model, and (iii) a hybrid model that combines the two.

Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately, and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human-robot interaction in multi-task settings.
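
For intuition on how trust might transfer via latent task representations, here is a simplified, kernel-based sketch in the spirit of the rational Bayes model (not the paper's actual formulation): trust on an unseen task is a similarity-weighted blend of trust observed on previous tasks, shrunk toward a prior when no observed task is similar. The task embeddings, bandwidth, and prior below are illustrative assumptions.

    import numpy as np

    def similarity(z1, z2, bandwidth=0.3):
        # RBF similarity between two tasks in a latent feature space.
        return np.exp(-np.sum((z1 - z2) ** 2) / (2 * bandwidth ** 2))

    def transfer_trust(observed, z_new, prior_trust=0.5):
        # observed: list of (task_embedding, trust in [0, 1]) pairs.
        weights = np.array([similarity(z, z_new) for z, _ in observed])
        trusts = np.array([t for _, t in observed])
        total = weights.sum()
        if total < 1e-8:            # new task resembles nothing seen before
            return prior_trust
        estimate = np.dot(weights, trusts) / total
        # Shrink toward the prior in proportion to overall dissimilarity.
        alpha = total / (total + 1.0)
        return alpha * estimate + (1.0 - alpha) * prior_trust

    # Hypothetical embeddings along (manipulation, navigation) demands:
    observed = [(np.array([0.9, 0.1]), 0.8),   # robot picked up a cup well
                (np.array([0.8, 0.2]), 0.7)]   # robot picked up a plate well
    print(transfer_trust(observed, np.array([0.85, 0.15])))  # similar task: well above prior
    print(transfer_trust(observed, np.array([0.1, 0.9])))    # dissimilar task: near prior 0.5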

Further Reading: 

Multi-Task Trust Transfer for Human-Robot Interaction, Harold Soh, Yaqi Xie, Min Chen, and David Hsu, International Journal of Robotics Research (IJRR), 2019. (Impact Factor: 6.134)
[ Pre-print PDF | IJRR Link ]

The Transfer of Human Trust in Robot Capabilities across Tasks, Harold Soh, Pan Shu, Min Chen, and David Hsu, Robotics: Science and Systems (RSS), 2018. (Finalist for Best Paper Award)
[ Paper PDF | Github ]

Robot Capability and Intention in Trust-based Decisions across Tasks


In this paper, we present results from a human-subject study designed to explore two facets of human mental models of robots (inferred capability and intention) and their relationship to overall trust and eventual decisions.

In particular, we examine delegation situations characterized by uncertainty, and explore how inferred capability and intention are applied across different tasks. We develop an online survey where human participants decide whether to delegate control to a simulated UAV agent. Our study shows that human estimations of robot capability and intent correlate strongly with overall self-reported trust. However, overall trust is not independently sufficient to determine whether a human will decide to trust (delegate) a given task to a robot. Instead, our study reveals that estimations of robot intention, capability, and overall trust are integrated when deciding to delegate. From a broader perspective, these results suggest that calibrating overall trust alone is insufficient; to make correct decisions, humans need (and use) multi-faceted mental models when collaborating with robots across multiple contexts.
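
A simple way to picture this integration is a logistic choice model in which perceived capability, perceived intention, and overall trust jointly drive the delegation decision. The snippet below is a hypothetical illustration with made-up weights, not the fitted model from the study.

    import math

    def delegate_probability(capability, intention, overall_trust,
                             w_cap=2.0, w_int=2.0, w_trust=1.0, bias=-2.5):
        # Logistic model of the decision to delegate a task to the robot.
        # All inputs lie in [0, 1]; the weights are illustrative, not fitted.
        score = (w_cap * capability + w_int * intention
                 + w_trust * overall_trust + bias)
        return 1.0 / (1.0 + math.exp(-score))

    # A capable robot whose intentions seem doubtful is delegated to less
    # often, even when overall self-reported trust is identical:
    print(delegate_probability(capability=0.9, intention=0.3, overall_trust=0.7))  # ~0.65
    print(delegate_probability(capability=0.9, intention=0.9, overall_trust=0.7))  # ~0.86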

Further Reading:

Robot Capability and Intention in Trust-based Decisions across Tasks, Yaqi Xie, Indu Prasad, Desmond Ong, David Hsu, and Harold Soh, ACM/IEEE Conference on Human-Robot Interaction (HRI), 2019. (24% Acceptance Rate)
[ Pre-print PDF ]

Robot Planning with Human Trust Models


This paper introduces a computational model that integrates trust into robot decision-making.

Specifically, we learn from data a partially observable Markov decision process (POMDP) with human trust as a latent variable. The trust-POMDP model provides a principled approach for the robot to (i) infer the trust of a human teammate through interaction, (ii) reason about the effect of its own actions on human behaviors, and (iii) choose actions that maximize team performance over the long term. We validated the model through human-subject experiments on a table-clearing task in simulation (201 participants) and with a real robot (20 participants). The results show that the trust-POMDP improves human-robot team performance in this task. They further suggest that maximizing trust in itself may not improve team performance.
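
As a rough sketch of the core mechanism (maintaining a belief over latent human trust and updating it from observed behavior), the snippet below runs a Bayes filter over discretized trust levels, using an assumed observation model in which low-trust humans intervene on the robot's actions more often. The observation model is a placeholder, not the one learned from data in the paper.

    import numpy as np

    trust_levels = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # discretized latent trust
    belief = np.full(len(trust_levels), 1.0 / len(trust_levels))

    def update(belief, intervened):
        # Assumed observation model: P(intervene | trust) = 1 - trust.
        likelihood = (1.0 - trust_levels) if intervened else trust_levels
        posterior = belief * likelihood
        return posterior / posterior.sum()

    # The human lets the robot act twice without intervening, so the
    # expected trust estimate rises above the 0.5 prior mean.
    belief = update(belief, intervened=False)
    belief = update(belief, intervened=False)
    print(np.dot(trust_levels, belief))   # approximately 0.74

In the full trust-POMDP, the robot then plans over this belief, trading off immediate task progress against actions that build or exploit trust.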

Further Reading:

Planning with Trust for Human-Robot Collaboration, Min Chen, Stefanos Nikolaidis, Harold Soh, David Hsu, and Siddhartha Srinivasa, ACM/IEEE Conference on Human-Robot Interaction (HRI), 2018. (23% Acceptance Rate, Finalist for Best Paper Award)
[ Pre-print PDF ]