Performance in language modelling has been significantly improved by training recurrent neural networks on large corpora. This progress has come at the cost of interpretability and an understanding of how these architectures function, making principled development of better language models more difficult. We look inside a state-of-the-art neural language model to analyse how this model represents high-level lexico-semantic information. In particular, we investigate how the model represents words by extracting activation patterns where they occur in the text, and compare these representations directly to human semantic knowledge.
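The word-representation extraction described above can be sketched roughly as follows: average the model's hidden states over a word's occurrences in the corpus, then compare words by the similarity of those averaged vectors. This is an illustrative sketch only, not the paper's implementation; the corpus, the random stand-in for LSTM hidden states, and all function names here are hypothetical.

```python
import numpy as np

# Hypothetical setup: per-token hidden states from a trained neural language
# model. Random vectors stand in for real activations in this sketch.
rng = np.random.default_rng(0)
hidden_size = 8

# Toy tokenised corpus and one hidden-state vector per token position.
corpus = ["the", "cat", "sat", "the", "dog", "sat", "the", "cat", "ran"]
states = rng.normal(size=(len(corpus), hidden_size))

def word_vector(word):
    """A word's representation: the mean hidden state over its occurrences."""
    idx = [i for i, tok in enumerate(corpus) if tok == word]
    return states[idx].mean(axis=0)

def cosine(u, v):
    """Cosine similarity between two extracted word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compare a word pair via the model-derived vectors. In an actual evaluation,
# such similarities would be correlated (e.g. Spearman's rho) with human
# similarity ratings over many word pairs.
model_sim = cosine(word_vector("cat"), word_vector("dog"))
print(f"model similarity(cat, dog) = {model_sim:.3f}")
```

The averaging step is one simple choice among several (one could equally keep per-occurrence vectors, or weight by context); the comparison against human judgments is what links the extracted representations to semantic knowledge.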
|Number of pages|3|
|Publication status|Published - 01 Nov 2018|
|Event|BlackBoxNLP 2018: Workshop on analyzing and interpreting neural networks for NLP - EMNLP 2018, Brussels, Belgium|
|Duration|31 Oct 2018 → 01 Nov 2018|