RWTH Publication No: 1004593 (2024)
TITLE Metareasoning in uncertain environments: a meta-BAMDP framework
AUTHORS Prakhar Godara, Tilman Diego Aléman, Angela J. Yu
ABSTRACT Reasoning may be viewed as an algorithm P that chooses an action a∗ ∈ A, aiming to optimize some outcome. However, executing P itself bears costs (time, energy, limited capacity, etc.), which must be weighed against the explicit utility obtained by making the choice in the underlying decision problem. Finding the right P can itself be framed as an optimization problem over the space of reasoning processes 𝒫, generally referred to as metareasoning. Conventionally, human metareasoning models assume that the agent knows the transition and reward distributions of the underlying MDP. This paper generalizes such models by proposing a meta Bayes-Adaptive MDP (meta-BAMDP) framework to handle metareasoning in environments with unknown reward/transition distributions, which encompasses a far larger and more realistic set of planning problems that humans and AI systems face. As a first step, we apply the framework to Bernoulli bandit tasks. Owing to the meta problem's complexity, our solutions are necessarily approximate. However, we introduce two novel theorems that significantly enhance the tractability of the problem, enabling stronger approximations that are robust within a range of assumptions grounded in realistic human decision-making scenarios. These results offer a resource-rational perspective and a normative framework for understanding human exploration under cognitive constraints, as well as experimentally testable predictions about human behavior in Bernoulli bandit tasks.
KEYWORDS
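The abstract's central trade-off, choosing actions under unknown Bernoulli reward probabilities while paying for the computation spent on the choice, can be illustrated with a minimal sketch. The following Python snippet is not the paper's method; it assumes Beta posteriors over arm probabilities, a fixed per-round deliberation cost, and a simple posterior-width rule for deciding when to deliberate, all of which are illustrative choices rather than details taken from the abstract.

```python
import numpy as np

def run_bandit(p_true, horizon, think_cost, n_samples, rng):
    """Bayes-adaptive Bernoulli bandit with a crude deliberation cost.

    The agent keeps Beta(alpha, beta) posteriors over each arm's reward
    probability. Each round it either acts greedily on the posterior means
    (cheap) or "reasons" by averaging n_samples posterior draws per arm
    (more informative, but charged think_cost). Net return is accumulated
    reward minus accumulated reasoning cost.
    """
    n_arms = len(p_true)
    alpha = np.ones(n_arms)  # Beta posterior parameters, uniform prior
    beta = np.ones(n_arms)
    net_return = 0.0
    for t in range(horizon):
        # Hypothetical meta-level rule (not the paper's policy):
        # deliberate only while the posteriors are still wide.
        posterior_std = np.sqrt(alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1)))
        if posterior_std.max() > 0.1:
            # Costly deliberation: average several posterior samples per arm.
            draws = rng.beta(alpha, beta, size=(n_samples, n_arms))
            arm = int(np.argmax(draws.mean(axis=0)))
            net_return -= think_cost
        else:
            # Cheap heuristic: greedy on the posterior mean.
            arm = int(np.argmax(alpha / (alpha + beta)))
        reward = rng.random() < p_true[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        net_return += reward
    return net_return

rng = np.random.default_rng(0)
print(run_bandit(p_true=np.array([0.3, 0.7]), horizon=200,
                 think_cost=0.02, n_samples=32, rng=rng))
```

Varying think_cost in this sketch shifts the break-even point between deliberation and the cheap heuristic, which is the kind of resource-rational trade-off the meta-BAMDP framework is designed to formalize.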