Animals learn to adapt to different levels of uncertainty in the environment by monitoring errors and engaging control processes. Recently, deep networks have been proposed as theories of animal perception, cognition and learning, but there is no theory that allows us to incorporate error monitoring or control into neural networks. Here, we asked whether it was possible to meta-train deep RL agents to adapt to the level of controllability of the environment. We found that this was only possible if we encouraged them to compute action prediction errors (APEs): error signals similar to those generated in mammalian medial prefrontal cortex (PFC). In an “observe vs. bet” bandit task, APE-trained networks meta-learned policies that closely resembled those of humans. We also show that biases in this error computation lead networks to display pathologies of control characteristic of psychological disorders, such as compulsivity and learned helplessness.
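To make the APE idea concrete, the following is a minimal, illustrative sketch rather than the paper's actual trained model: a recurrent actor-critic agent in PyTorch with an auxiliary head that predicts the agent's own next action, where the cross-entropy between that prediction and the action actually taken plays the role of the action prediction error. All names here (`APEAgent`, `ape_head`, the layer sizes) are hypothetical choices for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class APEAgent(nn.Module):
    """Illustrative sketch (not the paper's exact architecture):
    a recurrent actor-critic with an auxiliary head that predicts
    the agent's own next action. The cross-entropy between this
    prediction and the action actually taken serves as the
    action prediction error (APE)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.n_actions = n_actions
        # Recurrent core sees the current observation and the previous action.
        self.core = nn.GRUCell(obs_dim + n_actions, hidden)
        self.policy = nn.Linear(hidden, n_actions)    # actor head
        self.value = nn.Linear(hidden, 1)             # critic head
        self.ape_head = nn.Linear(hidden, n_actions)  # predicts the upcoming action

    def step(self, obs, prev_action, h):
        prev_onehot = F.one_hot(prev_action, self.n_actions).float()
        h = self.core(torch.cat([obs, prev_onehot], dim=-1), h)
        logits = self.policy(h)
        action = torch.distributions.Categorical(logits=logits).sample()
        # APE: how surprising the chosen action is to the prediction head.
        ape = F.cross_entropy(self.ape_head(h), action)
        return action, logits, self.value(h), ape, h


# Minimal usage with dummy data: one step for a batch of 8 episodes.
agent = APEAgent(obs_dim=4, n_actions=2)
obs = torch.randn(8, 4)
prev_action = torch.zeros(8, dtype=torch.long)
h = torch.zeros(8, 64)
action, logits, value, ape, h = agent.step(obs, prev_action, h)
# Meta-training would minimize: actor-critic loss + beta * ape.
```

In a setup like this, the APE term would be added to the standard RL objective with some weight, encouraging the recurrent state to track how predictable the agent's own actions are; scaling or biasing that term would be one plausible way to model the biased error computation, and resulting pathologies of control, described above.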