As generative AI becomes more deeply integrated into human activities, we may face a systematic credit-blame asymmetry: we may not deserve full credit or praise for the valuable outputs we create with generative AI (e.g., when we do not contribute sufficient skill or effort), yet we may be entirely blameworthy for harmful outputs (e.g., due to negligence or recklessness). How might patterns of praise and blame change, however, if we use personalised AI trained on our own past outputs, which we created solely by ourselves? In this talk I present recent theory and evidence on this question, drawing on data from the US, UK, China, and Singapore.