TY - GEN
T1 - Does History Matter? Using Narrative Context to Predict the Trajectory of Sentence Sentiment
AU - Watson, Liam
AU - Jurek-Loughrey, Anna
AU - Devereux, Barry
AU - Murphy, Brian
PY - 2020/5
Y1 - 2020/5
N2 - While there is a rich literature on tracking sentiment and emotion in texts, modelling the emotional trajectory of longer narratives, such as literary texts, poses new challenges. Previous work in sentiment analysis has focused on using information from within a sentence to predict a valence value for that sentence. We explore the influence of previous sentences on the sentiment of a given sentence; in particular, we investigate whether information present in a history of previous sentences can be used to predict a valence value for the following sentence. We explore both linear and non-linear models with a range of feature combinations, and we examine different context history sizes to determine which range of previous sentence context is most informative for our models. We establish a linear relationship between sentence context history and the valence value of the current sentence, demonstrate that sentences in closer proximity to the target sentence are more informative, and show that the inclusion of semantic word embeddings further enriches our model predictions.
UR - https://www.aclweb.org/anthology/2020.lincr-1.5
UR - https://www.mendeley.com/catalogue/13ffd738-7cc7-396c-8ce5-1bd32bdd6507/
M3 - Conference contribution
SN - 979-10-95546-52-8
T3 - Proceedings of the Second Workshop on Linguistic and Neurocognitive Resources
SP - 38
EP - 42
BT - Proceedings of the Second Workshop on Linguistic and Neurocognitive Resources
PB - European Language Resources Association
ER -