Predictive and interpretable: using artificial neural networks and classic cognitive models to understand human learning and decision making
Quantitative models of behavior are a fundamental tool in cognitive science. Typically, models are hand-crafted to implement specific cognitive mechanisms. Such “classic” models are interpretable by design, but may fit experimental data poorly. Artificial neural networks (ANNs), by contrast, can fit arbitrary datasets, at the cost of opaque mechanisms. I will present research in the classic modeling tradition that illuminates the development of learning during childhood and the teen years. We have also used classic methods to understand hierarchical learning and abstraction. I will then show the limitations of classic modeling and introduce a new ‘hybrid’ method that combines the predictive power of ANNs with the interpretability of classic models. Specifically, we replace the components of a reinforcement learning (RL) model with ANNs, testing RL’s implicit assumptions one by one against human behavior. We find that hybrid models fit human behavior about as well as fully general ANNs, while retaining the interpretability of classic cognitive models: they reveal reward-based learning mechanisms in humans that are strikingly similar to classic RL. They also reveal mechanisms not captured by classic models, including separate reward-blind mechanisms and the specific memory contents relevant to reward-based and reward-blind mechanisms.
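The hybrid idea above can be illustrated with a minimal sketch: a classic delta-rule value update with a fixed learning rate, alongside a version where the update rule itself is replaced by a small neural network. All names, the network shape, and the random placeholder weights are illustrative assumptions, not details from the talk; in the actual method the network would be fit to human choice data.

```python
import numpy as np

# Classic RL: delta-rule value update with a fixed learning rate alpha.
def classic_update(q, reward, alpha=0.1):
    return q + alpha * (reward - q)

# Hybrid sketch: the hand-crafted update rule is swapped for a tiny
# neural network mapping (current value, reward) -> next value.
# Weights here are random placeholders for illustration only.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)) * 0.5
b2 = np.zeros(1)

def hybrid_update(q, reward):
    x = np.array([q, reward])
    h = np.tanh(x @ W1 + b1)   # hidden layer
    return float(h @ W2 + b2)  # next value, unconstrained by RL assumptions

# Run the classic update over a short reward sequence.
q = 0.0
for r in [1, 0, 1, 1]:
    q = classic_update(q, r)
print(round(q, 4))  # -> 0.2629
```

Because the hybrid network can realize the delta rule as a special case, comparing its fitted behavior to the classic update is one way to test RL's implicit assumptions against human data.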
Date: 13 June 2024, 14:30 (Thursday, 8th week, Trinity 2024)
Venue: Sherrington Building, off Parks Road, OX1 3PT
Venue Details: Blakemore Lecture Theatre
Speaker: Maria Eckstein (DeepMind)
Organising department: Medical Sciences Division
Organiser: Dr Rui Ponte Costa (University of Oxford)
Part of: Oxford Neurotheory Forum
Booking required?: Not required
Audience: Members of the University only
Editor: Rui Costa