keywords:
computational modeling
Bayesian modeling
decision making
representation
reasoning
When facing tasks that are difficult to solve optimally, people can construct simplifying strategies that trade off utility with cost (Ho et al., 2022; Callaway et al., 2022). How we do so is an open question, especially in domains with large, structured strategy spaces where strategy evaluation itself is costly. One proposal is that people select strategies without much online computation, by a process of (reinforcement) learning through experience (Lieder & Griffiths, 2017). We present an alternative, resource-rational metareasoning framework that integrates strategy learning with adaptively bounded amounts of online strategy evaluation. We compare these proposals using a new video game task in which players traverse a grid of moving colored tiles while respecting complex rules about valid color sequences. Players quickly discover simplifying strategies, such as “only step on red tiles,” and adapt when the environment changes to favor new strategies, in ways that are most consistent with adaptive metareasoning.
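The core idea of weighing a strategy's expected utility against the cost of evaluating it, under a bounded computation budget, can be sketched as follows. This is a minimal illustration, not the authors' model: the function names, the rollout counts, and the cost parameters are all hypothetical.

```python
def simulate_utility(strategy, env, n_rollouts):
    """Hypothetical rollout-based estimate of a strategy's utility."""
    return sum(env(strategy) for _ in range(n_rollouts)) / n_rollouts

def metareason(strategies, env, eval_cost, budget):
    """Pick a strategy by trading off estimated utility against the
    cost of online evaluation, stopping when the budget runs out."""
    best, best_net = None, float("-inf")
    spent = 0
    for s in strategies:
        if spent >= budget:
            break  # bounded online evaluation: no budget left to think further
        n = 3  # rollouts per candidate strategy (illustrative)
        utility = simulate_utility(s, env, n)
        spent += n
        net = utility - eval_cost * n  # utility net of computation cost
        if net > best_net:
            best, best_net = s, net
    return best
```

For example, in a toy environment where stepping only on red tiles is the sole rewarding strategy, `metareason(["blue", "red"], env, eval_cost=0.01, budget=10)` would evaluate both candidates and return `"red"`; with a tighter budget it would stop after the first candidate, capturing the adaptively bounded evaluation the abstract contrasts with pure learning-based strategy selection.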