Reasoning about Preferences in Intelligent Agents
John Thangarajah, James Harland and Simeon Visser
Agent systems based on the BDI paradigm must decide which plans to use to achieve their goals. Usually this choice is left entirely to the system to determine. In this paper we show how preferences, which can be set by the user of the system, can be incorporated into the BDI execution process and used to guide these choices.
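The idea the abstract describes can be illustrated with a minimal sketch (this is an illustration, not the authors' formalism): applicable plans for a goal are first filtered by their context condition against the agent's beliefs, and then, instead of the default system choice (e.g. declaration order), the plan that maximises a user-supplied preference score is selected. All names here (`Plan`, `select_plan`, the travel example) are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Plan:
    name: str
    goal: str
    context: Callable[[dict], bool]  # applicability test on current beliefs
    attributes: Dict[str, float] = field(default_factory=dict)  # e.g. cost

def select_plan(goal: str, plans: List[Plan], beliefs: dict,
                preference: Callable[[Plan], float]) -> Plan:
    """Return the applicable plan that maximises the preference score."""
    applicable = [p for p in plans if p.goal == goal and p.context(beliefs)]
    if not applicable:
        raise LookupError(f"no applicable plan for goal {goal!r}")
    return max(applicable, key=preference)

# Hypothetical example: achieving a 'travel' goal, preferring cheaper plans.
plans = [
    Plan("fly",   "travel", lambda b: b["budget"] >= 300, {"cost": 300.0}),
    Plan("train", "travel", lambda b: b["budget"] >= 80,  {"cost": 80.0}),
]
prefer_cheap = lambda p: -p.attributes["cost"]  # user preference: minimise cost
chosen = select_plan("travel", plans, {"budget": 500}, prefer_cheap)
print(chosen.name)  # → train
```

With a different preference function (say, minimising travel time), the same applicable set would yield a different choice, which is the sense in which user preferences guide, rather than replace, the BDI plan-selection step.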