We develop a comprehensive framework to assess policy measures aimed at curbing the dissemination of false news on social media. A randomized experiment on Twitter during the 2022 U.S. mid-term elections evaluates policies such as priming awareness of misinformation, fact-checking, confirmation clicks, and prompting careful consideration of content. Priming is the most effective policy, reducing the sharing of false news while increasing the sharing of true content. A model of sharing decisions, motivated by persuasion, partisan signalling, and reputation concerns, predicts that policies affect sharing through three channels: (i) updating the perceived veracity and partisanship of content, (ii) raising the salience of reputation, and (iii) increasing sharing frictions. Structural estimation shows that all policies affect sharing through the salience of reputation and the cost of friction, whereas shifting perceived veracity plays a negligible role for every policy, including fact-checking. The priming intervention performs best because it raises reputation salience while adding minimal friction.
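
To fix ideas, the three channels can be summarized in a stylized sharing rule. This is only an illustrative sketch consistent with the description above, not the paper's actual specification; the notation is introduced here purely for exposition.

\[
\text{share} \;\Longleftrightarrow\; \underbrace{v(\hat{\pi}) + s(\hat{\rho})}_{\text{persuasion and partisan signalling}} \;-\; \underbrace{\lambda\, r(\hat{\pi})}_{\text{reputation}} \;-\; \underbrace{c}_{\text{sharing friction}} \;\ge\; 0,
\]

where \(\hat{\pi}\) and \(\hat{\rho}\) denote the perceived veracity and partisanship of the content, \(\lambda\) the salience of reputation concerns, and \(c\) the cost of sharing frictions. Under this stylization, a policy can operate by shifting \((\hat{\pi},\hat{\rho})\) (channel i), raising \(\lambda\) (channel ii), or raising \(c\) (channel iii).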