Causal Learnability
Loizos Michael
The ability to predict, or at least recognize, the state of the world that an action brings about is a central feature of autonomous agents. We propose herein a formal framework within which we investigate whether this ability can be autonomously learned. The framework makes explicit certain premises that we contend are central to such a learning task: \cond{i} slow sensors may prevent the sensing of an action's \emph{direct} effects during learning; \cond{ii} predictions need to be made reliably in \emph{future} and novel situations. In this work we initiate a thorough investigation of the conditions under which learning is or is not feasible. Despite the very strong negative learnability results that we obtain, we also identify interesting special cases where learning is both feasible and useful.