The Algorithmic Learning Equations: Evolving Strategies in Dynamic Games
We introduce the algorithmic learning equations, a set of ordinary differential equations that characterizes the finite-time and asymptotic behavior of the stochastic interaction between state-dependent learning algorithms in dynamic games. Our framework allows for a variety of information and memory structures, including noisy, perfect, private, and public monitoring, and for the possibility that players use distinct learning algorithms. We prove that play converges to a correlated equilibrium for a family of algorithms under correlated private signals. Finally, we apply our methodology to a repeated 2×2 prisoner’s dilemma game with perfect monitoring. We show that algorithms can learn a reward-punishment mechanism to sustain tacit collusion. We also find that algorithms can learn to coordinate on cycles of cooperation and defection.
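The abstract does not specify which learning algorithms are studied or the payoff values used. As a minimal sketch of the kind of setting described in the final sentences, the following assumes two tabular Q-learning agents with epsilon-greedy exploration playing a repeated prisoner's dilemma under perfect monitoring, where each agent conditions on the previous period's action profile as its state; the payoffs, learning rate, discount factor, and exploration rate below are illustrative choices, not the paper's.

```python
import numpy as np

# Illustrative sketch only: the talk's algorithms are unspecified, so we
# assume state-dependent Q-learning in a repeated 2x2 prisoner's dilemma
# with perfect monitoring. The state is the previous joint action.

rng = np.random.default_rng(0)

# Row player's payoff matrix; the column player's payoff is symmetric.
# Actions: 0 = cooperate (C), 1 = defect (D).
PAYOFF = np.array([[3.0, 0.0],
                   [5.0, 1.0]])

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.05        # assumed learning parameters
N_STATES = 4                                # previous action profile (a1, a2)
Q = [np.zeros((N_STATES, 2)) for _ in range(2)]  # one Q-table per player

def choose(q_row):
    """Epsilon-greedy action selection over one state's Q-values."""
    if rng.random() < EPS:
        return int(rng.integers(2))
    return int(np.argmax(q_row))

state = 0  # start from the mutual-cooperation state
for t in range(200_000):
    a = [choose(Q[i][state]) for i in range(2)]
    rewards = [PAYOFF[a[0], a[1]], PAYOFF[a[1], a[0]]]
    next_state = 2 * a[0] + a[1]
    for i in range(2):
        td_target = rewards[i] + GAMMA * Q[i][next_state].max()
        Q[i][state, a[i]] += ALPHA * (td_target - Q[i][state, a[i]])
    state = next_state

# Inspect the greedy policy learned at each previous action profile.
labels = ["(C,C)", "(C,D)", "(D,C)", "(D,D)"]
for s, lab in enumerate(labels):
    greedy = ["C" if np.argmax(Q[i][s]) == 0 else "D" for i in range(2)]
    print(f"after {lab}: player 1 plays {greedy[0]}, player 2 plays {greedy[1]}")
```

Whether the greedy policies that emerge look like a reward-punishment rule sustaining cooperation or like a cycle of cooperation and defection depends on these assumed parameters and the random seed; the talk's framework characterizes such outcomes analytically via the associated differential equations rather than by simulation.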
Date:
28 October 2022, 12:45 (Friday, 3rd week, Michaelmas 2022)
Venue:
Manor Road Building, Manor Road OX1 3UQ
Venue Details:
Seminar Room G
Speaker:
Patrick Chang (University of Oxford)
Organising department:
Department of Economics
Part of:
Student Research Workshop in Micro Theory
Booking required?:
Not required
Audience:
Members of the University only
Editors:
Melis Clark,
Emma Lane