Recent years have seen a remarkable increase in the computational power at our disposal, which has driven the emergence of large-scale deep learning systems. Today, the field of computational linguistics is dominated by high-capacity neural models that, on a wide variety of language-based tasks, often outperform even human baselines. However, a serious drawback of these semantic models is the opacity of their representations: the dimensions of their feature vectors are no longer characterised by clear, recognisable units of meaning. Although these models have produced state-of-the-art performance on natural language tasks, this has come at the cost of the interpretability of both the models and the word representations derived from them. Because of this lack of interpretability, it can be difficult for researchers to gain a deeper understanding of the kinds of knowledge these semantic models actually represent, or to make incremental improvements towards more structured representations. One approach researchers have used to overcome these problems is to consider our own language understanding directly. Human semantic knowledge provides a valuable account of lexical meaning, drawing on a broad range of linguistic and perceptual information. Promisingly, dense embedding models perform well on intrinsic evaluation tasks that indirectly compare the semantic information in the models with human judgements and other human-derived data, which can help explain certain behavioural phenomena. The focus of this dissertation is on gaining a deeper understanding of computational models that aim to learn the meaning of words, by determining whether they capture the kind of grounded perceptual knowledge reflected in human conceptual meaning.
- Queen's University Belfast
- Brian Murphy (Supervisor), Paul Miller (Supervisor) & Barry Devereux (Supervisor)