In this paper, we present a new approach to visual speech recognition that improves contextual modelling by combining Inter-Frame Dependent Models and Hidden Markov Models. This approach captures contextual information in visual speech that may be lost when using a Hidden Markov Model alone. We apply contextual modelling to a large speaker-independent isolated-digit recognition task and compare our approach to two commonly adopted feature-based techniques for incorporating speech dynamics. Results are presented for the baseline feature-based systems and the combined modelling technique. We show that the two techniques achieve similar levels of performance when used independently; however, significant improvements can be achieved by combining them. In particular, we report a relative Word Error Rate improvement in excess of 17% over our best baseline system.
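The reported figure is a *relative* Word Error Rate reduction, which is easy to misread as an absolute one. A minimal sketch of the arithmetic, using hypothetical WER values chosen only for illustration (the paper does not state the absolute rates here):

```python
def relative_wer_improvement(baseline_wer, new_wer):
    """Relative reduction in Word Error Rate versus a baseline system."""
    return (baseline_wer - new_wer) / baseline_wer

# Hypothetical example: a baseline WER of 10.0% reduced to 8.3%
# corresponds to a 17% relative improvement.
improvement = relative_wer_improvement(0.100, 0.083)
print(f"{improvement:.0%} relative WER reduction")
```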
Number of pages: 4
Publication status: Published - Sep 2010
Event: International Conference on Image Processing - Hong Kong, Hong Kong
Duration: 26 Sep 2010 → 29 Sep 2010