Belief Management for High-Level Robot Programs
Stephan Gspandl, Ingo Pill, Michael Reip, Gerald Steinbauer and Alexander Ferrein
The robot programming and planning language IndiGolog allows for the online execution of actions and the offline projection of programs in dynamic and partly unknown environments. One basic assumption is that the outcomes of primitive and sensing actions are modeled correctly and that the agent is informed about all exogenous events beyond its control. In real-world applications, however, these assumptions do not hold: action outcomes are error-prone and sensing results are noisy. In this paper, we present a belief management system for IndiGolog that detects inconsistencies between the robot's modeled belief and what actually happened in the real world, derives an explanation, and maintains a consistent belief. Our main contributions are (1) a belief management system following a history-based diagnosis approach that allows an agent to actively cope with faulty outcomes of primitive and sensing actions and with exogenous events the agent is initially unaware of; and (2) an implementation in IndiGolog and experimental results from a delivery domain.
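To give a flavor of the history-based diagnosis idea sketched in the abstract, the following minimal Python example (an illustrative toy, not the paper's IndiGolog implementation; all names and the single-fluent world model are assumptions) searches for alternative action histories, in which some actions may have failed silently, whose projection is consistent with an observed sensing result:

```python
# Minimal sketch of history-based diagnosis: given a nominal action history
# and an observation contradicting the projected belief, enumerate
# alternative histories in which some actions failed, and keep those whose
# projection matches the observation. Illustrative only; not the paper's
# IndiGolog implementation.

from itertools import product

def project(history):
    """Project a history (list of ('move', ok_flag) pairs) to a final
    position. A 'move' nominally advances the robot by 1; a faulty
    execution (ok_flag == False) leaves the position unchanged."""
    pos = 0
    for action, ok in history:
        if action == "move" and ok:
            pos += 1
    return pos

def diagnose(actions, observed_pos):
    """Return all fault assignments (tuples of ok flags, one per action)
    that explain the observation, ordered so that explanations assuming
    fewer faults come first."""
    explanations = []
    for flags in product([True, False], repeat=len(actions)):
        history = list(zip(actions, flags))
        if project(history) == observed_pos:
            explanations.append(flags)
    # fewer assumed faults = more plausible explanation
    return sorted(explanations, key=lambda f: f.count(False))

# The robot believes it executed three successful moves (belief: pos == 3),
# but sensing reports pos == 2, so at least one move must have failed.
candidates = diagnose(["move", "move", "move"], observed_pos=2)
print(candidates[0])  # a minimal explanation with exactly one failed move
```

Replacing the brute-force enumeration with an incremental search over the action history is what makes such diagnosis feasible online; the toy above only illustrates the consistency check between projected belief and observation.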