The ongoing global race for bigger and better artificial intelligence (AI) systems is expected to have a profound societal and environmental impact: altering job markets, disrupting business models, and enabling new governance and societal welfare structures that can affect global consensus on climate action pathways. However, current AI systems are trained on biased datasets that could destabilize the political agencies shaping climate change mitigation and adaptation decisions and compromise social stability, potentially leading to societal tipping events. The appropriate design of less biased AI systems that account for both direct and indirect effects on societies and planetary challenges is therefore a question of paramount importance. In this paper, we address data-centric knowledge generation for climate action in ways that minimize AI bias. We argue for the need to co-align less biased AI with an epistemic web on planetary health challenges to support more trustworthy decision-making. A human-in-the-loop AI can be designed to align with three goals. First, it can contribute to a planetary epistemic web that supports climate action. Second, it can directly enable mitigation and adaptation interventions through knowledge of social tipping elements. Finally, it can reduce the data injustices associated with AI pretraining datasets.