Towards 4D coupled models of conversational facial expression interactions

Jason Vandeventer, Lukas Gräser, Magdalena Rychlowska, Paul L. Rosin, David Marshall

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper we introduce a novel approach for building 4D coupled statistical models of conversational facial expression interactions. To build these coupled models we use 3D AAMs for feature extraction, 4D polynomial fitting for sequence representation, and concatenated feature vectors of frontchannel-backchannel interactions (with offset values) for the coupled model.
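To make the construction concrete, the sketch below shows one way such a coupled model could be assembled in Python with NumPy: each 3D AAM parameter trajectory is fitted with a polynomial over time, the frontchannel and backchannel coefficient vectors are concatenated together with the offset value, and PCA yields the coupled statistical model. The function names, polynomial degree, and variance threshold are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming per-frame 3D AAM parameter vectors have already
# been extracted for each conversational sequence. Names (fit_sequence,
# build_coupled_model, POLY_DEGREE) are illustrative, not from the paper.
import numpy as np

POLY_DEGREE = 4  # assumed polynomial degree for the temporal fit


def fit_sequence(aam_params, degree=POLY_DEGREE):
    """Fit a polynomial over time to each AAM parameter and return the
    stacked coefficients as a fixed-length descriptor of the sequence.

    aam_params: (n_frames, n_params) array of per-frame 3D AAM parameters.
    """
    n_frames, n_params = aam_params.shape
    t = np.linspace(0.0, 1.0, n_frames)            # normalised time axis
    coeffs = [np.polyfit(t, aam_params[:, j], degree) for j in range(n_params)]
    return np.concatenate(coeffs)                  # (n_params * (degree + 1),)


def build_coupled_model(front_seqs, back_seqs, offsets, var_kept=0.98):
    """Build a coupled statistical (PCA) model from concatenated
    frontchannel/backchannel descriptors plus the temporal offset."""
    X = np.array([
        np.concatenate([fit_sequence(f), fit_sequence(b), [off]])
        for f, b, off in zip(front_seqs, back_seqs, offsets)
    ])
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(var, var_kept)) + 1    # number of modes retained
    return {"mean": mean, "modes": Vt[:k], "eigvals": (S[:k]**2) / (len(X) - 1)}
```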
Using a coupled model of conversational smile interactions, we predicted each sequence's backchannel signal from its frontchannel signal. In a subsequent experiment, human observers rated the predicted sequences as highly similar to the originals. Our results demonstrate the usefulness of coupled models as powerful tools to analyse and synthesise key aspects of conversational interactions, including conversation timings, backchannel responses to frontchannel signals, and the spatial and temporal dynamics of conversational facial expression interactions.
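The following sketch illustrates how such a prediction could be carried out with the coupled model built above: the frontchannel block of the concatenated vector is used to estimate the model parameters by least squares, and the reconstructed full vector yields the backchannel coefficients and offset. This completion strategy is a common way of using coupled PCA models and is assumed here; it is not necessarily the paper's exact procedure.

```python
# Hedged sketch: infer the backchannel part of the coupled vector from the
# frontchannel descriptor, using least-squares completion of the PCA model.
import numpy as np


def predict_backchannel(model, front_descriptor, n_front):
    """Given the frontchannel descriptor (first n_front entries of the
    coupled vector), return the predicted backchannel coefficients and offset."""
    mean, modes = model["mean"], model["modes"]    # modes has shape (k, D)
    # Solve for model parameters b that best explain the observed front block:
    # front ≈ mean_front + modes_front^T b
    A = modes[:, :n_front].T                       # (n_front, k)
    residual = front_descriptor - mean[:n_front]
    b, *_ = np.linalg.lstsq(A, residual, rcond=None)
    # Reconstruct the full coupled vector and read off the backchannel block
    full = mean + modes.T @ b
    return full[n_front:-1], full[-1]              # backchannel coeffs, offset
```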
Original language: English
Title of host publication: Proceedings of the British Machine Vision Conference
Publication status: Published - 2015

