Cases such as Cambridge Analytica and the use of artificial intelligence (AI) by the Chinese government suggest that AI poses risks to democracy. This paper analyzes these risks through the concept of epistemic agency and argues that the use of AI threatens to influence the formation and revision of beliefs in at least three ways: through the direct, intended manipulation of beliefs; through the type of knowledge offered; and through the creation and maintenance of epistemic bubbles. It then outlines implications for research and policy.