Using sparse semantic embeddings learned from multimodal text and image data to model human conceptual knowledge

Steven Derby, Paul Miller, Brian Murphy, Barry Devereux

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Distributional models provide a convenient way of modelling semantics using dense embedding spaces derived from unsupervised learning algorithms. However, the dimensions of dense embedding spaces are not designed to resemble human semantic knowledge. Moreover, embeddings are often built from a single source of information (typically text data), even though neurocognitive research suggests that semantics is deeply linked to both language and perception. In this paper, we combine multimodal information from text- and image-based representations derived from state-of-the-art distributional models to produce sparse, interpretable vectors using Joint Non-Negative Sparse Embedding. Through in-depth analyses comparing these sparse models to human-derived behavioural and neuroimaging data, we demonstrate their ability to predict interpretable linguistic descriptions of human ground-truth semantic knowledge.
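
The core technique named in the abstract is a joint non-negative sparse factorisation of two modality matrices (word-by-text-feature and word-by-image-feature) into a shared sparse embedding. The sketch below is a minimal, hypothetical illustration of that idea using multiplicative updates on toy random data; the variable names, matrix sizes, and the L1 weight are assumptions for illustration, and this is not the authors' JNNSE implementation.

```python
# Minimal sketch of a joint non-negative sparse factorisation in the spirit of
# Joint Non-Negative Sparse Embedding (hypothetical setup, not the paper's code).
# Two word-by-feature matrices, one per modality, share one sparse embedding A:
#   X_text ~ A @ D_text,   X_img ~ A @ D_img,   A >= 0 and encouraged to be sparse.
import numpy as np

rng = np.random.default_rng(0)

n_words, d_text, d_img, k = 500, 300, 128, 50    # toy sizes (assumed)
X_text = rng.random((n_words, d_text))           # e.g. text-derived word features
X_img = rng.random((n_words, d_img))             # e.g. image-derived word features

# Concatenating the modalities reduces the joint problem to sparse NMF on [X_text | X_img].
X = np.hstack([X_text, X_img])

A = rng.random((n_words, k))      # shared sparse embedding (rows = words)
D = rng.random((k, X.shape[1]))   # stacked dictionaries [D_text | D_img]

lam, eps = 0.1, 1e-9              # L1 weight on A, numerical floor
for _ in range(200):
    # Multiplicative updates keep A and D non-negative; lam in the denominator
    # of the A update acts as an L1 sparsity penalty on the embedding.
    A *= (X @ D.T) / (A @ (D @ D.T) + lam + eps)
    D *= (A.T @ X) / ((A.T @ A) @ D + eps)

D_text, D_img = D[:, :d_text], D[:, d_text:]
print("fraction of near-zero entries in A:", np.mean(A < 1e-3))
```

With real text and image feature matrices in place of the random toys, the rows of A would play the role of the sparse, interpretable word vectors the paper evaluates against behavioural and neuroimaging data.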
Original language: English
Title of host publication: Proceedings of the Conference on Computational Natural Language Learning (CoNLL 2018)
Pages: 260-270
Number of pages: 11
Publication status: Published - 31 Oct 2018
Event: CoNLL 2018: The SIGNLL Conference on Computational Natural Language Learning - Brussels, Belgium
Duration: 31 Oct 2018 → 01 Nov 2018
http://www.conll.org/2018

Conference

Conference: CoNLL 2018: The SIGNLL Conference on Computational Natural Language Learning
Abbreviated title: CoNLL 2018
Country/Territory: Belgium
City: Brussels
Period: 31/10/2018 → 01/11/2018
Internet address: http://www.conll.org/2018
