Abstract
Depression is a serious medical condition that affects a large number of people worldwide. In this context, many studies have been proposed to estimate the degree of depression from depression-specific features and modalities. Supported by medical studies showing that depression is a disorder of impaired emotion regulation, we propose a different approach, which relies on the rationale that estimating depression level can benefit from the concurrent learning of emotion intensity. To test this hypothesis, we design different attention-based multi-task architectures that concurrently regress/classify both depression level and emotion intensity using text data. Experiments on two benchmark datasets, namely, the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ) and the CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) dataset, show that substantial performance improvements can be achieved compared to emotion-unaware single-task and multi-task approaches.
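The abstract describes an attention-based multi-task setup in which depression-level and emotion-intensity prediction share a common text representation. The following is a minimal, hypothetical NumPy sketch of that general idea, not the paper's actual architecture: all dimensions, weight vectors, targets, and the loss weighting `alpha` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy token embeddings for one utterance: 6 tokens, 8 dimensions (hypothetical sizes).
tokens = rng.normal(size=(6, 8))

# Shared attention pooling: score each token, normalize with softmax,
# and take the weighted sum as the shared utterance representation.
w_att = rng.normal(size=(8,))
scores = tokens @ w_att
weights = np.exp(scores - scores.max())
weights /= weights.sum()
sentence = weights @ tokens  # shared representation used by both tasks

# Two task-specific linear heads on top of the shared representation.
w_dep = rng.normal(size=(8,))  # depression-level regressor (hypothetical)
w_emo = rng.normal(size=(8,))  # emotion-intensity regressor (hypothetical)
dep_pred = sentence @ w_dep
emo_pred = sentence @ w_emo

# Joint multi-task objective: weighted sum of per-task squared errors,
# so gradients from both tasks shape the shared representation.
dep_true, emo_true = 1.0, 0.5  # hypothetical targets
alpha = 0.5                    # hypothetical task-weighting coefficient
loss = alpha * (dep_pred - dep_true) ** 2 + (1 - alpha) * (emo_pred - emo_true) ** 2
print(float(loss))
```

In a trained model the heads would be optimized jointly, letting the emotion task act as an auxiliary signal for depression estimation, which is the hypothesis the paper tests.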
Original language | English |
---|---|
Pages (from-to) | 47-59 |
Number of pages | 13 |
Journal | IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE |
Volume | 15 |
Issue number | 3 |
Early online date | 15 Jul 2020 |
DOIs | |
Publication status | Published - Aug 2020 |
Externally published | Yes |
Bibliographical note
Funding Information: Dr. Sriparna Saha gratefully acknowledges the Young Faculty Research Fellowship (YFRF) Award, supported by the Visvesvaraya PhD Scheme for Electronics and IT, Ministry of Electronics and Information Technology (MeitY), Government of India, implemented by Digital India Corporation (formerly Media Lab Asia), for carrying out this research.
Publisher Copyright:
© 2005-2012 IEEE.
ASJC Scopus subject areas
- Theoretical Computer Science
- Artificial Intelligence