In some cases, ethical questions about the use of AI systems can be addressed without much reflection on what kinds of entities those systems are; what we need to know instead is what the systems can do and how reliable they are. In other cases, however, it matters what kind of thing we are dealing with. For example, the problem of the ‘responsibility gap’ is said to arise partly because AI systems are not the kinds of things which can be morally responsible for their behaviour. One of the fundamental issues in this area is what it would take for AI systems to be agents. I will present an account of minimal agency in AI, built on the premise that agents pursue goals through interaction with their environments. To understand agency, we need to distinguish activity which constitutes the pursuit of a goal from activity which merely constitutes the performance of a function.