Abstract
In this paper, we introduce an application of matrix factorization to produce corpus-derived, distributional
models of semantics that demonstrate cognitive plausibility. We find that word representations
learned by Non-Negative Sparse Embedding (NNSE), a variant of matrix factorization, are sparse,
effective, and highly interpretable. To the best of our knowledge, this is the first approach that
yields semantic representations of words satisfying these three desirable properties. Through extensive
experimental evaluations on multiple real-world tasks and datasets, we demonstrate the superiority
of semantic models learned by NNSE over other state-of-the-art baselines.
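NNSE casts word-representation learning as non-negative sparse coding: a matrix X of corpus-derived word statistics (e.g., co-occurrence counts) is factored as X ≈ A·D, where the embeddings A are non-negative and L1-penalized for sparsity and the dictionary rows of D are norm-bounded. The following is a minimal sketch of this style of factorization, not the paper's own solver: it uses scikit-learn's `DictionaryLearning` with non-negative codes as a stand-in, and the toy matrix `X` and all hyperparameter values are illustrative assumptions.

```python
# Minimal sketch of NNSE-style factorization, assuming scikit-learn's
# DictionaryLearning as a stand-in for the paper's training procedure.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
# X: words x corpus features (e.g., co-occurrence statistics);
# a small random stand-in here, not a real corpus matrix.
X = rng.random((50, 200))

# NNSE-style objective: min_{A,D} sum_i ||X_i - A_i D||^2 + lambda ||A_i||_1
# with A >= 0 and norm-bounded dictionary rows. DictionaryLearning mirrors
# this: alpha is the L1 penalty, positive_code enforces A >= 0, and atoms
# of D are unit-normalized during fitting.
model = DictionaryLearning(
    n_components=10,            # latent dimensions (illustrative; far below a real setting)
    alpha=1.0,                  # sparsity penalty lambda (illustrative)
    fit_algorithm="cd",         # coordinate descent supports the positivity constraint
    transform_algorithm="lasso_cd",
    positive_code=True,         # non-negativity on the embeddings A
    random_state=0,
)
A = model.fit_transform(X)      # sparse, non-negative word embeddings (words x dims)
D = model.components_           # dictionary (dims x corpus features)

print("embedding shape:", A.shape)
print("fraction of zero entries in A:", float(np.mean(A == 0)))
```

Because each latent dimension of A is non-negative and sparse, a word loads on only a few dimensions, and each dimension can be read off via its top-scoring words, which is the source of the interpretability the abstract claims.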
| Original language | English |
|---|---|
| Title of host publication | International Conference on Computational Linguistics (COLING 2012), Mumbai, India |
| Publisher | Association for Computational Linguistics |
| Pages | 1933-1949 |
| Number of pages | 17 |
| Publication status | Published - Dec 2012 |
Keywords
- distributional semantics, interpretability, neuro-semantics, sparse coding, vector-space models, word embeddings