Improving Intrinsic Exploration with Language Abstractions (Machine Learning Paper Explained)
reinforcementlearning, ai, explained

Exploration is one of the oldest challenges in Reinforcement Learning, with no clear solution to date. Especially in environments with sparse rewards, agents struggle to decide which parts of the environment to explore further. Providing intrinsic motivation in the form of a pseudo-reward is sometimes used to overcome this challenge, but it often relies on handcrafted heuristics and can lead agents into deceptive dead ends. This paper proposes using natural-language descriptions of encountered states as a way of assessing novelty. In two procedurally generated environments, the authors demonstrate the usefulness of language, which is inherently concise and abstract and therefore lends itself well to this task. A minimal sketch of the core idea follows the outline below.

OUTLINE:
0:00 Intro
1:10 Paper Overview: Language for exploration
5:40 The MiniGrid & MiniHack environments
7:00 Annotating states with language
9:05 Baseline algorithm: AMIGo
12:20 Adding language to AMIGo
22:55 Baseline algorithm: NovelD
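To make the core idea concrete, here is a minimal sketch of a count-based novelty bonus computed over language descriptions of states rather than raw observations. This is not the paper's implementation: the `LanguageNoveltyBonus` class, the `describe` oracle, and the bonus scale `beta` are illustrative assumptions.

```python
from collections import defaultdict

class LanguageNoveltyBonus:
    """Count-based intrinsic reward over language annotations of states.

    A sketch of the general idea, not the paper's method: because language
    is concise and abstract, many distinct raw states (e.g. a key at (3,4)
    vs. at (3,5)) can share one description ("you see a key"), so novelty
    is tracked per concept rather than per pixel-level state.
    """

    def __init__(self, beta: float = 1.0):
        self.counts = defaultdict(int)  # visits per unique description
        self.beta = beta                # scale of the intrinsic bonus

    def bonus(self, description: str) -> float:
        # Standard count-based bonus: decays as a description is revisited.
        self.counts[description] += 1
        return self.beta / self.counts[description] ** 0.5

# Hypothetical usage inside a training loop, where `describe` is an
# assumed annotator mapping a state to a language description:
#   r_total = r_extrinsic + novelty.bonus(describe(state))
```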