A simple neural network to structure thoughts using rapidly changing synapses
A central organizing principle of human mental life is grammar: it allows us to separate the structure of a thought from its content. Yet few models explain how grammar might be implemented in neurons. We combined two rapid Hebbian synaptic plasticity rules to demonstrate how neurons can implement simple grammar. The first rule associates neurons representing words with neurons representing syntactic roles; for example, “dog” may associate with “subject” or “object”. The second rule establishes the sequential ordering of the roles (e.g. subject → verb → object), guided by predefined syntactic knowledge. We find that, like humans, the network encodes and retrieves grammatical sentences better than shuffled word lists. It can serialize a ‘bag of words’ to express an idea as a sentence. The network can model languages that rely on syntactic order, but also order-free morphemic languages. The model predicts the existence of syntactic and lexical priming, and can simulate evoked potentials as recorded with EEG. When lesioned, the network exhibits classical symptoms of neurological aphasia, including a dissociation between agrammatic and semantic aphasia that current deep neural network language models do not show. Crucially, it achieves all this using an intuitive representation in which words fill roles, emulating structured cognition.
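To make the two-rule mechanism concrete, here is a minimal sketch in Python of how it might operate, assuming one-hot word and role units, an outer-product Hebbian update for the word-to-role binding (rule 1), and a fixed role-transition matrix standing in for the predefined syntactic knowledge (rule 2). The vocabulary, roles, and learning rate are illustrative assumptions, not the speaker's actual model.

```python
import numpy as np

WORDS = ["dog", "bites", "man"]          # lexical units (assumed example)
ROLES = ["subject", "verb", "object"]    # syntactic-role units

n_w, n_r = len(WORDS), len(ROLES)

# Rule 1: rapidly changing Hebbian word-to-role binding weights, initially empty.
W_bind = np.zeros((n_r, n_w))

# Rule 2: role-to-role sequence weights, here hard-coded as predefined
# syntactic knowledge (subject -> verb -> object).
W_seq = np.zeros((n_r, n_r))
for i in range(n_r - 1):
    W_seq[i + 1, i] = 1.0

def encode(sentence, eta=1.0):
    """Bind each word to the currently active role (rule 1),
    advancing the active role via the sequence weights (rule 2)."""
    global W_bind
    role = np.zeros(n_r)
    role[0] = 1.0                              # start at 'subject'
    for word in sentence:
        w = np.zeros(n_w)
        w[WORDS.index(word)] = 1.0
        W_bind += eta * np.outer(role, w)      # rapid Hebbian association
        role = W_seq @ role                    # move to the next role

def retrieve():
    """Serialize the stored 'bag of words' into a sentence by stepping
    through the roles and reading out the most strongly bound word."""
    role = np.zeros(n_r)
    role[0] = 1.0
    out = []
    for _ in range(n_r):
        readout = role @ W_bind                # word bound to the active role
        out.append(WORDS[int(np.argmax(readout))])
        role = W_seq @ role
    return out

encode(["dog", "bites", "man"])
print(retrieve())   # -> ['dog', 'bites', 'man']
```

Under these assumptions, retrieval regenerates the sentence in role order even though the bindings themselves are an unordered set, which is the sense in which the model serializes a bag of words.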
Date: 30 November 2023, 14:30 (Thursday, 8th week, Michaelmas 2023)
Venue: Sherrington Library, off Parks Road OX1 3PT
Speaker: Sanjay Manohar (University of Oxford)
Organising department: Medical Sciences Division
Organiser: Dr Rui Ponte Costa (University of Oxford)
Part of: Oxford Neurotheory Forum
Booking required?: Not required
Audience: Members of the University only