I’ll talk about two loosely related ideas. First, I’ll describe a simple way of learning a smart (prior-dependent) reinforcement learning algorithm using recurrent networks, which we call meta-RL. Second, I’ll talk about experimental work in MEG in which we found spontaneous reactivation of sequences of states in a non-spatial task. These ideas are related insofar as meta-RL depends on incremental learning from a set of different tasks and needs experience to be randomized, which spontaneous reactivation could provide. More broadly, there’s a lot to be learned from the relationships among all of your past experiences.
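
To make the meta-RL recipe concrete, here is a minimal sketch of the training loop. Everything specific in it (two-armed Bernoulli bandits as the task distribution, an LSTM as the recurrent network, plain REINFORCE as the outer-loop learner, the hyperparameters) is an illustrative assumption on my part, not the implementation from the talk. The essential ingredient is that the network receives its previous action and reward as inputs, so that training across many randomly sampled tasks shapes hidden-state dynamics that act as a fast, prior-dependent RL algorithm.

```python
# Minimal meta-RL sketch: an LSTM trained across random two-armed bandits.
# All specifics here are illustrative assumptions, not the talk's actual setup.
import torch
import torch.nn as nn

N_ARMS, HIDDEN, EPISODE_LEN = 2, 48, 100

class MetaRLAgent(nn.Module):
    """LSTM whose inputs are the previous action and reward, so its hidden
    state can carry the statistics needed to explore and exploit in-episode."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTMCell(N_ARMS + 1, HIDDEN)  # one-hot prev action + prev reward
        self.policy = nn.Linear(HIDDEN, N_ARMS)

    def forward(self, x, state):
        h, c = self.rnn(x, state)
        return self.policy(h), (h, c)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

for episode in range(2000):
    # Outer loop: sample a fresh task each episode. This per-episode
    # randomization is the kind of interleaved experience the talk suggests
    # spontaneous reactivation could supply.
    probs = torch.rand(N_ARMS)  # this task's arm payoff probabilities
    state = (torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN))
    x = torch.zeros(1, N_ARMS + 1)  # no previous action/reward at t = 0
    log_probs, rewards = [], []
    for t in range(EPISODE_LEN):
        # Inner loop: the network must learn about this bandit within the
        # episode using only its hidden state, not weight updates.
        logits, state = agent(x, state)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        reward = torch.bernoulli(probs[action])
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        x = torch.cat([nn.functional.one_hot(action, N_ARMS).float(),
                       reward.view(1, 1)], dim=1)
    # REINFORCE on the episode return, with a crude constant baseline.
    advantage = torch.stack(rewards).sum() - EPISODE_LEN * 0.5
    loss = -torch.stack(log_probs).sum() * advantage
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, the weights are frozen; the LSTM’s within-episode behavior on a new bandit is the learned RL algorithm, shaped by the priors of the task distribution it was trained on.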