Accommodating Human Variability in Human-Robot Teams through Theory of Mind
Laura Hiatt, Anthony Harrison and Greg Trafton
This paper has a companion video, obtainable from the IJCAI Video Track Chair's website.

The variability of human behavior during plan execution poses a difficult challenge for human-robot teams. In this paper, we use the concepts of theory of mind to enable robots to account for two sources of human variability during team operation. When faced with an unexpected action by a human teammate, a robot uses a simulation analysis of different hypothetical cognitive models of the human to identify the most likely cause of the human's behavior. This allows the cognitive robot to account for variability arising both from differing knowledge and beliefs about the world, and from the different paths the human could take with a given set of knowledge and beliefs. We performed an experiment showing that cognitive robots equipped with this functionality are viewed as more natural and more intelligent teammates than robots that either say nothing when presented with human variability, or simply point out any discrepancies between the human's expected and actual behavior. Overall, we conclude that this analysis provides an effective, general approach for determining what thought process is driving a human's actions, allowing their robotic teammates to be as effective as possible.
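The simulation analysis described above can be sketched in miniature: maintain several hypothetical cognitive models of the teammate, run each forward to predict an action, and keep the models whose prediction matches the observed behavior. This is only an illustrative sketch, not the authors' implementation; all names, the toy belief sets, and the `simulate` logic are assumptions invented for the example.

```python
# Hypothetical sketch of simulation-based hypothesis selection over
# cognitive models of a human teammate. Everything here (class names,
# belief propositions, action labels) is illustrative, not the paper's code.
from dataclasses import dataclass


@dataclass(frozen=True)
class MentalModel:
    """One hypothetical cognitive model of the human teammate."""
    label: str
    beliefs: frozenset  # propositions the human is assumed to hold


def simulate(model: MentalModel, goal: str) -> str:
    """Toy stand-in for running a cognitive model forward:
    predict the human's next action from their assumed beliefs."""
    if goal == "fetch_tool" and "tool_in_room_A" in model.beliefs:
        return "go_to_room_A"
    return "go_to_room_B"  # default: search elsewhere


def matching_models(models, goal, observed_action):
    """Keep the models whose simulated action explains the observation."""
    return [m for m in models if simulate(m, goal) == observed_action]


# Two candidate explanations for the human's behavior:
models = [
    MentalModel("shared beliefs", frozenset({"tool_in_room_A"})),
    MentalModel("outdated beliefs", frozenset()),
]

# The human unexpectedly walked to room B; which model explains that?
matches = matching_models(models, "fetch_tool", "go_to_room_B")
print([m.label for m in matches])  # -> ['outdated beliefs']
```

Under this framing, a match on a model with different beliefs suggests the human is missing (or holds stale) knowledge, while multiple matching models with shared beliefs would suggest the human is simply taking an alternative valid path.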