People supported by AI-powered decision support tools frequently overrely on the AI: they accept an AI's suggestion even when that suggestion is wrong. Adding explanations to the AI suggestions does not appear to reduce this overreliance, and some studies suggest that it may even increase it. Our research suggests that human cognitive motivation moderates the effectiveness of decision support tools powered by explainable AI. Specifically, even in high-stakes domains, people rarely engage analytically with each individual AI recommendation and explanation; instead, they appear to develop general heuristics about whether and when to follow the AI suggestions. We show that interventions applied at decision-making time to disrupt heuristic reasoning can increase people's cognitive engagement with the AI's output and consequently reduce (but not entirely eliminate) human overreliance on the AI. Our research also points to two shortcomings in how our research community is pursuing the explainable AI research agenda. First, commonly used evaluation methods rely on proxy tasks that artificially focus people's attention on the AI models, leading to misleading (overly optimistic) results. Second, by insufficiently examining the sociotechnical contexts, we may be solving problems that are technically the most obvious but not the most valuable to the key stakeholders.