Representation of Word Meaning in the Intermediate Projection Layer of a Neural Language Model

Research output: Contribution to conference › Paper › peer-review


Abstract

Performance in language modelling has been significantly improved by training recurrent neural networks on large corpora. This progress has come at the cost of interpretability and an understanding of how these architectures function, making principled development of better language models more difficult. We look inside a state-of-the-art neural language model to analyse how this model represents high-level lexico-semantic information. In particular, we investigate how the model represents words by extracting activation patterns where they occur in the text, and compare these representations directly to human semantic knowledge.
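The comparison described above can be sketched as follows. This is a minimal illustrative example, not the paper's actual pipeline: it assumes we already have per-word activation vectors (random toy vectors here stand in for states extracted from the projection layer) and a set of hypothetical human similarity ratings for word pairs, then correlates model-derived cosine similarities with the human ratings using Spearman rank correlation.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two activation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(x, y):
    """Spearman rank correlation (no ties assumed), without scipy."""
    rx = np.argsort(np.argsort(x)).astype(float)  # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)  # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Toy stand-ins: in the paper these would be activation patterns
# extracted from the language model where each word occurs in text.
rng = np.random.default_rng(0)
words = ["cat", "dog", "car", "truck"]
reps = {w: rng.normal(size=8) for w in words}

# Hypothetical human similarity ratings for word pairs (illustrative).
pairs = [("cat", "dog"), ("car", "truck"), ("cat", "car"), ("dog", "truck")]
human = [0.90, 0.85, 0.20, 0.15]

# Model similarities for the same pairs, then rank-correlate.
model = [cosine(reps[a], reps[b]) for a, b in pairs]
rho = spearman(model, human)
print(f"Spearman rho between model and human similarities: {rho:.3f}")
```

With real extracted representations, a high rank correlation would indicate that the layer's geometry mirrors human lexico-semantic judgments.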
Original language: English
Pages: 362-364
Number of pages: 3
DOIs
Publication status: Published - 01 Nov 2018
Event: BlackBoxNLP 2018: Workshop on Analyzing and Interpreting Neural Networks for NLP - EMNLP 2018, Brussels, Belgium
Duration: 31 Oct 2018 - 01 Nov 2018
https://blackboxnlp.github.io/

Workshop

Workshop: BlackBoxNLP 2018: Workshop on Analyzing and Interpreting Neural Networks for NLP
Abbreviated title: BlackBoxNLP 2018
Country: Belgium
City: Brussels
Period: 31/10/2018 - 01/11/2018

