MentalLLaMA: Interpretable Mental Health Analysis on Social Media with Large Language Models
This is a virtual seminar. For the Zoom link, please see "Venue" below. Please consider subscribing to the mailing list: web.maillist.ox.ac.uk/ox/subscribe/ai4mch
With the development of web technology and social media platforms, social media texts are becoming a rich source for automatic mental health analysis. As traditional discriminative methods suffer from low interpretability, recent large language models (LLMs) have been explored for interpretable mental health analysis on social media, which aims to provide detailed explanations along with predictions. The results show that ChatGPT can generate explanations approaching human quality for its correct classifications. Domain-specific finetuning is an effective solution, but it faces two challenges: (1) a lack of high-quality training data, and (2) the absence of open-source LLMs for interpretable mental health analysis that would lower the finetuning cost.

To alleviate these problems, we build the first multi-task and multi-source interpretable mental health instruction (IMHI) dataset on social media, with 105K data samples to support LLM instruction tuning. The raw social media data are collected from 10 existing sources covering 8 mental health analysis tasks. We use expert-written few-shot prompts and the collected labels to prompt ChatGPT and obtain explanations from its responses. To ensure the reliability of the explanations, we perform strict automatic and human evaluations of the correctness, consistency, and quality of the generated data.

Based on the IMHI dataset and the LLaMA2 foundation models, we train MentalLLaMA, the first open-source LLM series for interpretable mental health analysis with instruction-following capability. We also evaluate MentalLLaMA on the IMHI evaluation benchmark with 10 test sets, examining both the correctness of its predictions and the quality of its explanations. The results show that MentalLLaMA approaches state-of-the-art discriminative methods in correctness and generates high-quality explanations.
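As a rough illustration of the explanation-collection step described above, the sketch below prompts a ChatGPT-style model with an expert-written few-shot example and a collected gold label, and keeps the model's free-text explanation. The prompt wording, example post, model name, and function names are illustrative assumptions, not the authors' exact IMHI pipeline.

```python
# Hypothetical sketch: elicit an explanation for a known (post, label) pair
# by few-shot prompting a chat model. Prompt text and names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One expert-style demonstration (invented here for illustration).
FEW_SHOT = (
    "Post: I can't sleep and nothing feels worth doing anymore.\n"
    "Label: depression\n"
    "Explanation: The post reports persistent insomnia and anhedonia, "
    "both common indicators of depression.\n"
)

def collect_explanation(post: str, label: str) -> str:
    """Ask the model to justify an already-assigned label for a post."""
    prompt = (
        "You are a mental health analysis assistant. Given a social media "
        "post and its assigned label, explain the evidence for the label.\n\n"
        f"{FEW_SHOT}\n"
        f"Post: {post}\nLabel: {label}\nExplanation:"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT model used in the work
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # deterministic output eases the later consistency checks
    )
    return response.choices[0].message.content.strip()
```

In the work described in the talk, responses gathered this way are then filtered by automatic and human evaluation before being used for instruction tuning.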
Date:
30 January 2024, 15:00 (Tuesday, 3rd week, Hilary 2024)
Venue:
https://zoom.us/j/92981799893?pwd=K1BxWndKUWNhWGhuSEVEOGppdC8vdz09
Speaker:
Professor Sophia Ananiadou (The University of Manchester)
Organising department:
Department of Psychiatry
Organiser contact email address:
andrey.kormilitzin@psych.ox.ac.uk
Host:
Dr Andrey Kormilitzin (University of Oxford)
Part of:
Artificial Intelligence for Mental Health Seminar Series
Booking required?:
Not required
Booking url:
https://web.maillist.ox.ac.uk/ox/info/ai4mch
Booking email:
andrey.kormilitzin@psych.ox.ac.uk
Audience:
Public
Editor:
Andrey Kormilitzin