The increasing ease of access to large participant populations via online recruitment platforms offers considerable promise for studying aspects of emotion and participants’ relationships to brands, artistic media and advertising content. Large samples can be gathered quickly and easily, and participants’ facial expressions can be assessed from video captured by the webcams that are ubiquitous in modern computing environments. However, a long-standing issue is the reliability of the data provided by automatic facial expression recognition systems. We evaluated the facial expressions of 836 participants elicited by a sad video presented online. Thanks to this sizeable sample, the experiment has high statistical power and showed, as expected, that Sadness expressions are recognized significantly more often than other facial expressions (i.e. Happiness, Surprise and Disgust). However, by using Generalized Additive Mixed Models (GAMM) to analyse the time series, we show that dynamic statistical analyses can avoid biased interpretations. Technical sensitivity, posture interpretation, emotion idiosyncrasy, the intensity of spontaneous expressions and the social context of emotion elicitation are open questions that must be answered before reliable online facial expression recognition can be achieved.
Title of host publication: Symposium on Computational Modelling of Emotion: Theory and Applications
Number of pages: 6
Publication status: Published - 18 Apr 2017