Long-termism for risk-averse altruists

According to Long-termism, altruistic agents should try to beneficially influence the long-run future rather than aim at short-term benefits. The likelihood that I can significantly impact the long-term future of humanity seems very small, whereas I can be reasonably confident of achieving significant short-term goods. However, the potential value of the far future is so enormous that even an act with only a tiny probability of preventing an existential catastrophe should apparently be assigned a much higher expected value than an alternative that realizes some short-term benefit with near certainty. This talk will explore whether agents who are risk averse should be more or less willing to endorse Long-termism, looking in particular at agents who can be modelled as risk avoidant within the framework of Buchak's risk-weighted expected utility theory.
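
As a rough sketch of the framework referred to above (the utility function u, the risk function r, and the numbers below are illustrative assumptions, not part of the talk), Buchak's risk-weighted expected utility of an act A with outcomes ordered from worst to best is standardly written as

\[
\mathrm{REU}(A) \;=\; u(x_1) \;+\; \sum_{i=2}^{n} r\!\left(\sum_{j=i}^{n} p_j\right)\bigl[u(x_i) - u(x_{i-1})\bigr],
\]

where x_1 <= ... <= x_n are the outcomes of A, p_i is the probability of x_i, and r : [0,1] -> [0,1] is a non-decreasing risk function with r(0) = 0 and r(1) = 1. A risk-avoidant agent has r(p) < p for intermediate p (for example r(p) = p^2), so gains that arrive only with small probability count for less than under standard expected utility. For illustration, taking u(x) = x and r(p) = p^2, a one-in-a-million chance of a good worth 10^9 has expected utility 10^-6 x 10^9 = 1,000 but risk-weighted value (10^-6)^2 x 10^9 = 0.001; it therefore beats a sure benefit worth 100 under expected utility yet loses to it under risk-weighted expected utility.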