What have large language models actually learned about language(s)? Lessons from linguistics and mechanistic interpretability


This is a hybrid event. Please find the Teams link in the abstract.

Biography:

Dominik Lukeš is a Lead Business Technologist at the AI and ML Competency Centre with a focus on digital scholarship and academic practice. Prior to joining the Centre he started the Reading and Writing Innovation Lab where he focused on technologies supporting reading and writing in academic contexts.

Dominik’s research focus is in linguistics and language pedagogy. He has previously run workshops at Oxford on using corpus analysis tools for humanities research. Dominik’s core area of expertise is the intersection of conceptual metaphor theory and discourse analysis, with a particular focus on construction grammar. He was a founding member of the journal Critical Approaches to Discourse Analysis Across Disciplines (CADAAD) and co-edited with Chris Hart the 2007 volume Cognitive Linguistics in Critical Discourse Analysis. He also translated George Lakoff’s Women, Fire and Dangerous Things into Czech. He is the author of Czech Navigator, a grammar of Czech for non-native speakers.

He blogs on MetaphorHacker.net, maintains a site exploring Large Language Models as Semantic Machines, and publishes an occasional newsletter on AI in Academic Practice.

Teams link: teams.microsoft.com/l/meetup-join/19%3ameeting_YTdlYTk5ZmMtZGUxMy00YjlkLTg4MWQtOWM4MWNjNzlmMWJm%40thread.v2/0?context=%7b%22Tid%22%3a%22cc95de1b-97f5-4f93-b4ba-fe68b852cf91%22%2c%22Oid%22%3a%22e230fa69-f5f7-4a56-935e-7c0582e568dd%22%7d

Abstract:

This talk will explore what exactly large language models (LLMs) have learned about language, and why it matters. It will outline an answer to the question of how these models represent the structure of language, and how this compares with how humans think about language. To that end, it will examine how LLMs perform across a range of languages, including those with smaller digital footprints, and what this implies about how they represent language. It will then contrast the results of this investigation with the latest findings from the field of “mechanistic interpretability.” These findings offer insights into the inner workings of LLMs, providing clues about the fundamental differences between how we understand language and how language is represented inside the models. Finally, it will suggest the need for a new approach to LLM research that brings together a richer understanding of language with a systematic investigation of how LLMs perform across a variety of languages.