Experimental evidence suggests that familiar items are represented by larger hippocampal neuronal assemblies than less familiar ones. Memory storage and recall in the hippocampus have been modelled using attractor neural networks, whose design poses stability challenges when dynamic learning rules are implemented. In this talk I will describe a computational modelling approach, based on a dynamic attractor network model, that we used to show how hippocampal neural assemblies can evolve differently depending on how frequently the stimuli are presented (Boscaglia et al., 2023). I will illustrate the design choices we made to obtain dynamic memory representations, as well as the behaviour of the model in different experimental paradigms involving memory formation, reinforcement and forgetting. I will also discuss how our computational results align with findings from single-cell recordings in the human hippocampus, which makes this model suitable for exploring other memory coding mechanisms.
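To give a flavour of the kind of dynamics the talk refers to, below is a minimal, generic sketch: a binary Hopfield-style attractor network with an online Hebbian rule and slow synaptic decay, in which a frequently presented stimulus builds a stronger assembly that completes from a partial cue, while a rarely presented one fails to do so. All names and parameter values (`n_neurons`, `learning_rate`, the recall threshold, and so on) are illustrative assumptions for this sketch, not the model of Boscaglia et al. (2023).

```python
# Generic attractor-network sketch (illustrative assumptions, not the published model):
# repeated presentation strengthens within-assembly Hebbian weights, so the
# frequently seen pattern survives pattern completion while the rare one does not.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 200        # network size (assumed)
sparsity = 0.1         # fraction of neurons active per stimulus pattern (assumed)
learning_rate = 0.05   # Hebbian increment per presentation (assumed)
decay = 0.001          # slow, unspecific synaptic decay, a simple stand-in for forgetting

def make_pattern():
    """Random sparse binary pattern standing in for a stimulus representation."""
    p = np.zeros(n_neurons)
    p[rng.choice(n_neurons, int(sparsity * n_neurons), replace=False)] = 1.0
    return p

def present(W, pattern):
    """One stimulus presentation: Hebbian strengthening within the pattern's assembly."""
    W += learning_rate * np.outer(pattern, pattern)
    np.fill_diagonal(W, 0.0)   # no self-connections
    return W

def recall(W, cue, steps=20, theta=5.0):
    """Iterate the network from a partial cue; theta acts as a simple global threshold."""
    s = cue.copy()
    for _ in range(steps):
        s = (W @ s > theta).astype(float)
    return s

def half_cue(p):
    """Cue a memory with only half of its assembly neurons."""
    cue = p.copy()
    active = np.flatnonzero(cue)
    cue[active[: len(active) // 2]] = 0.0
    return cue

W = np.zeros((n_neurons, n_neurons))
familiar, novel = make_pattern(), make_pattern()

for trial in range(50):
    W = present(W, familiar)       # familiar stimulus shown on every trial
    if trial % 10 == 0:
        W = present(W, novel)      # novel stimulus shown only occasionally
    W *= (1.0 - decay)             # slow decay of all synapses between trials

for name, p in [("familiar", familiar), ("novel", novel)]:
    out = recall(W, half_cue(p))
    overlap = (out @ p) / p.sum()
    print(f"{name}: recovered {overlap:.0%} of the original assembly")
```

Under these assumed parameters, the frequently presented pattern is reinstated in full from a half cue, whereas the rarely presented one is not; the published model adds the dynamic learning rule and stability mechanisms discussed in the talk, which this sketch deliberately omits.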