In computational linguistics, the distributional hypothesis of word meaning has allowed us to construct distributional semantic models by mining large corpora of text to extract word co-occurrence statistics. Neural approaches such as word2vec learn two matrices of representations from word-context counts, using the dot product between a target vector and a context vector, passed through a sigmoid function, to model the probability of positive association. With negative sampling and gradient descent optimization, we can learn an approximation of word meaning. Our goal is to construct vector representations for a set of human-derived properties using a neural architecture similar to skip-gram word2vec, which uses the target word to predict its surrounding windowed context. Surveying concepts for human-interpretable features is costly and time-consuming, whereas unsupervised learning of vector space models from text data is cheap and accessible. We learn feature meaning by sampling a small subset of pretrained word embeddings for which the properties are known. Applying negative sampling and gradient descent only to the matrix of feature representations allows us to learn feature meaning in relation to the pretrained word vectors. A word and a feature then have a meaningful association if their vectors are close together, which we measure using cosine similarity. By ranking these features for a given concept, we can extract the concept's most salient features. Furthermore, since the features live in the wider vector space model, we can score unseen words against them. This process allows us to extract plausible features for words, which could make further surveying of concepts for properties much faster. Active learning would then allow us to repeat the process with a larger lexicon, which could in turn be surveyed again, this time with a higher probability of correctly sampling features.
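The training loop described above can be sketched in a few lines of NumPy. This is a minimal illustrative toy, not the authors' implementation: the vocabulary, the `is_animal`/`has_wheels` properties, and all hyperparameters are invented for the example. The key point it demonstrates is that the pretrained word embeddings stay frozen, gradient updates from the sigmoid/negative-sampling objective touch only the feature matrix, and features are then ranked for a word by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy "pretrained" word embeddings (frozen throughout training); rows are words.
vocab = ["dog", "cat", "car", "truck"]
word_vecs = rng.normal(scale=0.5, size=(len(vocab), 8))

# Hypothetical human-derived property norms: (word_index, feature_index) positive pairs.
features = ["is_animal", "has_wheels"]
positives = [(0, 0), (1, 0), (2, 1), (3, 1)]  # dog/cat are animals; car/truck have wheels

# The feature matrix is the ONLY set of parameters we update.
feat_vecs = rng.normal(scale=0.01, size=(len(features), 8))

lr, n_neg, epochs = 0.1, 1, 200
for _ in range(epochs):
    for w, f in positives:
        # Positive pair: push sigmoid(word . feature) toward 1.
        g = sigmoid(word_vecs[w] @ feat_vecs[f]) - 1.0
        feat_vecs[f] -= lr * g * word_vecs[w]
        # Negative sampling: random words treated as non-associated,
        # pushing sigmoid(word . feature) toward 0.
        for wn in rng.integers(0, len(vocab), size=n_neg):
            g = sigmoid(word_vecs[wn] @ feat_vecs[f])
            feat_vecs[f] -= lr * g * word_vecs[wn]

# Rank features for the word "dog" by cosine similarity in the shared space.
scores = {feat: cosine(word_vecs[0], feat_vecs[i]) for i, feat in enumerate(features)}
top_feature = max(scores, key=scores.get)
```

Because the feature vectors are trained into the same space as the frozen embeddings, any word with a pretrained vector, including words never surveyed, can be scored against every feature with the same cosine ranking.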
|Number of pages||1|
|Publication status||Published - 02 Sep 2019|
|Event||International Conference of the Royal Statistical Society (RSS 2019) - Belfast, United Kingdom|
Duration: 02 Sep 2019 → 05 Sep 2019
|Conference||International Conference of the Royal Statistical Society (RSS 2019)|
|Abbreviated title||RSS 2019|
|Period||02/09/2019 → 05/09/2019|
Derby, S., Miller, P., & Devereux, B. (2019). Feature2Vec: Distributional Semantic Modelling of Human Property Knowledge. Abstract from International Conference of the Royal Statistical Society (RSS 2019), Belfast, United Kingdom.