The internet is becoming an increasingly dominant feature of social life in the Western world. More and more users rely on platforms such as Facebook, Twitter and Google to receive news, communicate with others, share content and conduct everyday tasks. As this reliance grows, it is important to ask how we can ensure safety and fairness on the internet. For instance, how can we limit the spread of harmful content such as rumour and hate speech on social media? How can we ensure that the algorithms that filter much of the content we see produce results that are both accurate and unbiased? What can we do to protect vulnerable users online?
In this talk we describe two projects that seek to advance safety and fairness online. We report on the findings of the Digital Wildfire project, which investigated opportunities for the responsible governance of social media, looking in particular at how we might prevent and limit the spread of hate speech and rumour online whilst also protecting freedom of speech. We also introduce the UnBias project, which investigates the user experience of algorithm-driven internet services and the processes of algorithm design. This project focuses on the perspectives of young people and involves activities that will 1) support user understanding of online environments, 2) raise awareness among online providers about the concerns and rights of internet users, and 3) generate debate about the ‘fair’ operation of algorithms in modern life.
Helena Webb is a Senior Researcher in the Department of Computer Science at the University of Oxford. She works as part of the Human Centred Computing Group, which examines the inter-relationships between computing and social practices. She is interested in communication, organisation and the use of technology in everyday work and life. Most recently she has been working on the Digital Wildfire and UnBias projects.
Menisha Patel is a researcher in the Department of Computer Science at the University of Oxford. She is part of the Human Centred Computing Group, and her work focuses on both fine-grained and more systemic social issues surrounding the design, development and integration of technologies into our world. She is interested in how micro-level approaches, informed by ethnomethodology and conversation analysis, can be used to understand and assess the design and use of technologies both in the workplace and in prototype form. Her recent work has been in the field of “responsible research and innovation” (RRI), where she has worked on projects concerning how more responsible practice can be integrated into research and innovation processes to engender more socially and ethically desirable innovation.