In recent years, there has been growing interest in the reasoning abilities of Large Language Models (LLMs). However, there has been relatively little discussion of how LLMs handle the default reasoning patterns that have motivated various systems of non-monotonic logic. This talk examines the capabilities of 28 LLMs to reason with 13 patterns widely discussed in the non-monotonic logic literature, focusing on inference patterns involving generics (e.g., `Birds fly’, `Ravens are black’) that express default information. Generics are notable because they express quasi-universal generalisations exhibiting complex exception-permitting behaviour that cannot easily be described in set-theoretic or Bayesian terms. Generics are of special interest to philosophers, linguists, logicians, and cognitive scientists, not only because of their connection with default reasoning but also because of their centrality in cognition and concept acquisition. Assessing the capacity of LLMs for default reasoning is important for evaluating the extent to which their reasoning abilities match those of humans more generally.