Abstract:
Decoding inner monologues from brain signals could be hugely impactful, with the potential to significantly improve the lives of those who currently cannot communicate. In the past five years, there have been several notable breakthroughs in this area, using deep learning to decode brain signals directly into meaningful text through both surgically invasive and non-invasive methods. In this talk, I will discuss these breakthroughs and the current state of the art in brain-speech decoding methods from the perspective of a deep learning researcher. I will also discuss the primary challenges we face in this field and our plans to overcome them. While the implications of successful decoding are vast, the field must also navigate complex ethical considerations, and the future of this work not only has the potential to improve brain-computer interfaces (BCIs) for communication but also extends into many broader applications.
Bio:
Dulhan is a DPhil student in the Autonomous Intelligent Machines and Systems (AIMS) CDT. At PNPL, his work focuses on leveraging deep learning to find efficient representations of brain signals for downstream tasks (e.g. phoneme recognition from heard-speech brain data). Prior to joining PNPL, he worked on multi-agent RL and reasoning with graph neural networks at the University of Cambridge. Before this, he completed his BSc in Computer Science at the University of Southampton, where he researched computer vision systems for visual navigation. He has also worked on large language models at Speechmatics and on developing assembly-level machine learning kernels for new hardware at Arm.