Host: The Japanese Society for Artificial Intelligence
Name: The 39th Annual Conference of the Japanese Society for Artificial Intelligence
Number: 39
Location: [in Japanese]
Date: May 27, 2025 - May 30, 2025
Model-based reinforcement learning (RL) is a promising approach to learning to control agents in a sample-efficient manner, but it often struggles to generalize beyond the tasks it was trained on. While previous work has explored using pretrained visual representations (PVRs) to improve generalization, these approaches have not outperformed representations learned from scratch in out-of-distribution (OOD) settings. In this work, we propose incorporating object-centric representations, which have demonstrated strong OOD generalization by learning compositional representations, into model-based RL with PVRs. We investigate whether this object-centric inductive bias improves both sample efficiency and task performance across in-distribution and OOD environments.
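To make the proposed combination concrete, the sketch below illustrates one way object-centric representations can sit between a frozen PVR backbone and a latent world model: patch features from the pretrained encoder are grouped into object slots by a Slot Attention-style module, and a per-slot dynamics network predicts next-step slots from slots and the action. This is a minimal illustration under assumed module names and sizes, not the paper's implementation.

```python
# Illustrative sketch only: frozen PVR patch features -> object slots -> slot dynamics.
# All class names, dimensions, and the slot count are assumptions for this example.
import torch
import torch.nn as nn


class SlotAttention(nn.Module):
    """Groups a set of input features into K object slots (simplified Slot Attention)."""

    def __init__(self, num_slots: int = 7, dim: int = 64, iters: int = 3):
        super().__init__()
        self.num_slots, self.iters, self.scale = num_slots, iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_sigma = nn.Parameter(torch.ones(1, 1, dim) * 0.1)
        self.to_q, self.to_k, self.to_v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in, self.norm_slots = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_patches, dim) features from a frozen pretrained visual encoder
        b, n, d = feats.shape
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_mu + self.slots_sigma * torch.randn(b, self.num_slots, d, device=feats.device)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # slots compete for patches: softmax over the slot dimension
            attn = torch.softmax(torch.einsum("bkd,bnd->bkn", q, k) * self.scale, dim=1)
            attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
            updates = torch.einsum("bkn,bnd->bkd", attn, v)
            slots = self.gru(updates.reshape(-1, d), slots.reshape(-1, d)).reshape(b, self.num_slots, d)
        return slots  # (batch, num_slots, dim)


class SlotDynamics(nn.Module):
    """Predicts next-step slots from current slots and the action (residual per-slot MLP)."""

    def __init__(self, dim: int = 64, action_dim: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + action_dim, 128), nn.ReLU(), nn.Linear(128, dim))

    def forward(self, slots: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        a = action.unsqueeze(1).expand(-1, slots.size(1), -1)  # broadcast action to every slot
        return slots + self.net(torch.cat([slots, a], dim=-1))


if __name__ == "__main__":
    feats = torch.randn(2, 196, 64)   # e.g. 14x14 patch features from a frozen PVR backbone
    action = torch.randn(2, 4)
    slots = SlotAttention()(feats)
    next_slots = SlotDynamics()(slots, action)
    print(slots.shape, next_slots.shape)  # (2, 7, 64) for both
```

In this arrangement the PVR stays frozen, and the object-centric inductive bias enters through the slot bottleneck that the world model rolls forward; the actual architecture and training objective used in the paper may differ.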