Representation and Pre-Activation of Lexical-Semantic Knowledge in Neural Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Neural network language models have the ability to capture the contextualised meanings of words in a sentence by dynamically evolving a representation of the linguistic input in a manner evocative of human language comprehension. While researchers have been able to analyse whether key linguistic regularities are adequately characterised by these evolving representations, determining whether they activate lexico-semantic knowledge similarly to humans remains challenging. In this paper, we perform a systematic analysis of how closely the intermediate layers from LSTM and transformer language models correspond to human semantic knowledge. Furthermore, in order to make more meaningful comparisons with theories of human language comprehension in psycholinguistics, we focus on two key stages where the meaning of a particular target word may arise: immediately before the word's presentation to the model (comparable to forward inferencing), and immediately after the word token has been input into the network. Our results indicate that the transformer models are better at capturing semantic knowledge relating to lexical concepts, both during word prediction and when retention is required.
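The abstract's two probing points can be illustrated with a minimal, purely hypothetical sketch: extracting a transformer's layer-wise hidden states at the position immediately before a target word (prediction) and at the target word itself (retention), then comparing each layer to a human-derived semantic vector. The choice of GPT-2 via the HuggingFace transformers library, the single-token target, and the cosine-similarity comparison against a placeholder vector are illustrative assumptions, not the paper's actual models, stimuli, or analysis.

    # Hypothetical sketch of a layer-wise probe at two stages around a target word.
    # Assumptions (not from the paper): GPT-2, a single-token target, and cosine
    # similarity against a random placeholder standing in for human semantic norms.
    import torch
    from transformers import GPT2TokenizerFast, GPT2LMHeadModel

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
    model.eval()

    sentence = "The chef sliced the bread with a knife"
    target = " knife"  # target word whose lexical-semantic knowledge is probed

    enc = tokenizer(sentence, return_tensors="pt")
    target_id = tokenizer(target)["input_ids"][0]  # assumes a single-token target
    t = (enc["input_ids"][0] == target_id).nonzero(as_tuple=True)[0][0].item()

    with torch.no_grad():
        out = model(**enc)

    # out.hidden_states: tuple of (num_layers + 1) tensors, each [1, seq_len, dim]
    pre_activation = [h[0, t - 1] for h in out.hidden_states]  # before the word (prediction)
    post_input = [h[0, t] for h in out.hidden_states]          # at the word (retention)

    # Placeholder for a human semantic vector (e.g. derived from feature norms);
    # random here only to show the comparison step.
    human_vector = torch.randn(model.config.n_embd)

    def layerwise_similarity(states, reference):
        """Cosine similarity between each layer's representation and a reference vector."""
        return [torch.cosine_similarity(s, reference, dim=0).item() for s in states]

    print("pre-activation:", layerwise_similarity(pre_activation, human_vector))
    print("post-input:    ", layerwise_similarity(post_input, human_vector))

In practice the comparison step would use the paper's chosen mapping between model space and human semantic norms rather than a raw cosine against a placeholder vector.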

Original language: English
Title of host publication: CMCL 2021 - Workshop on Cognitive Modeling and Computational Linguistics, Proceedings
Editors: Emmanuele Chersoni, Nora Hollenstein, Cassandra Jacobs, Yohei Oseki, Laurent Prevot, Enrico Santus
Publisher: Association for Computational Linguistics (ACL)
Pages: 211-221
Number of pages: 11
ISBN (Electronic): 9781954085350
Publication status: Published - 01 Jun 2021
Event: 11th Workshop on Cognitive Modeling and Computational Linguistics, CMCL 2021 - Virtual, Online
Duration: 10 Jun 2021 → …

Publication series

Name: CMCL 2021 - Workshop on Cognitive Modeling and Computational Linguistics, Proceedings

Conference

Conference: 11th Workshop on Cognitive Modeling and Computational Linguistics, CMCL 2021
City: Virtual, Online
Period: 10/06/2021 → …

Bibliographical note

Publisher Copyright:
© 2021 Association for Computational Linguistics.

Keywords

  • Transformers
  • computational semantics
  • cognition
  • neural language models
  • LSTM

ASJC Scopus subject areas

  • Language and Linguistics
  • Speech and Hearing
