Abstract
To improve the social capabilities of embodied conversational agents, we propose a computational model that enables agents to automatically select and display appropriate smiling behavior during human-machine interaction. A smile may convey different communicative intentions depending on subtle characteristics of the facial expression and on contextual cues. To construct such a model, we first explore the morphological and dynamic characteristics of the different types of smile (polite, amused, and embarrassed) that an embodied conversational agent may display. The resulting lexicon of smiles is based on a corpus of virtual agent smiles created directly by users and analyzed with a machine learning technique. Moreover, during an interaction, the agent's smiling behavior affects the observer's perception of the speaker's interpersonal stance. As a second step, we therefore propose a probabilistic model that automatically computes the user's likely perception of the embodied conversational agent's social stance given its smiling behavior and its physical appearance. This model, based on a corpus of users' perceptions of smiling and non-smiling virtual agents, enables a virtual agent to determine the appropriate smiling behavior to adopt given the interpersonal stance it wants to express. An experiment using real human-virtual agent interaction provided some validation of the proposed model.
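The second step described in the abstract can be pictured as inverting a learned perception model: given the stance the agent wants to convey, choose the smile type with the highest estimated probability of being perceived that way. The sketch below is a minimal, hypothetical illustration of this idea, not the authors' published model; the smile types match the paper's lexicon, but the stance labels, the appearance categories, and all probability values are invented placeholders rather than the authors' corpus estimates (which the paper derives from users' ratings of smiling and non-smiling agents).

```python
# Minimal sketch (not the authors' code) of a lookup-style probabilistic
# model: map (smile type, agent appearance) to a distribution over
# perceived interpersonal stances, then pick the smile most likely to
# convey a target stance. All probability values are toy placeholders.

SMILES = ("polite", "amused", "embarrassed", "none")

# Toy stand-in for P(perceived stance | smile type, agent appearance).
# In the paper, this distribution would be estimated from a corpus of
# users' perceptions of smiling and non-smiling virtual agents.
PERCEPTION = {
    ("polite", "realistic"):       {"warm": 0.45, "neutral": 0.45, "cold": 0.10},
    ("amused", "realistic"):       {"warm": 0.70, "neutral": 0.20, "cold": 0.10},
    ("embarrassed", "realistic"):  {"warm": 0.30, "neutral": 0.50, "cold": 0.20},
    ("none", "realistic"):         {"warm": 0.10, "neutral": 0.50, "cold": 0.40},
    ("polite", "cartoonish"):      {"warm": 0.50, "neutral": 0.40, "cold": 0.10},
    ("amused", "cartoonish"):      {"warm": 0.75, "neutral": 0.15, "cold": 0.10},
    ("embarrassed", "cartoonish"): {"warm": 0.35, "neutral": 0.45, "cold": 0.20},
    ("none", "cartoonish"):        {"warm": 0.15, "neutral": 0.50, "cold": 0.35},
}

def best_smile(target_stance: str, appearance: str) -> str:
    """Return the smile type maximizing P(target stance | smile, appearance)."""
    return max(
        SMILES,
        key=lambda s: PERCEPTION[(s, appearance)].get(target_stance, 0.0),
    )

if __name__ == "__main__":
    # Under these toy numbers, an agent wanting to appear warm would
    # display an amused smile regardless of its appearance.
    print(best_smile("warm", "realistic"))  # -> "amused"
```

A table lookup is the simplest instance of such a model; the same selection rule applies unchanged if the conditional distribution comes from a richer estimator fitted to perception ratings.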
| Original language | English |
|---|---|
| Article number | 4 |
| Number of pages | 34 |
| Journal | ACM Transactions on Interactive Intelligent Systems (TiiS) |
| Volume | 7 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - 01 Mar 2017 |
Profiles
- Gary McKeown
  - School of Psychology - Senior Lecturer
  - Intelligent Autonomous Manufacturing Systems
  - Person: Academic