Abstract
Automatic visual speech recognition is an interesting problem in pattern recognition, especially when audio data is noisy or not readily available. It is also a very challenging task, mainly because visual articulations carry less information than the audible utterance. In this work, principal component analysis is applied to image patches — extracted from the video data — to learn the weights of a two-stage convolutional network. Block histograms are then extracted as unsupervised learning features. These features are used to train a recurrent neural network with a set of long short-term memory cells to obtain spatiotemporal features. Finally, the obtained features are used in a tandem GMM-HMM system for speech recognition. Our results show that the proposed method outperforms the baseline techniques on the OuluVS2 audiovisual database for phrase recognition: with the frontal view, cross-validation and test sentence correctness reach 79% and 73%, respectively, compared with the baseline of 74% on cross-validation.
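The first stage of the pipeline — learning convolutional filter weights by applying PCA to image patches — can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, patch size, and filter count below are illustrative assumptions; it only shows the general PCANet-style idea of taking the leading principal components of mean-removed patches as convolution filters.

```python
import numpy as np

def pca_filters(images, patch=7, n_filters=8):
    """Learn one stage of PCA convolution filters (PCANet-style sketch).

    images: array of shape (N, H, W), grayscale frames (e.g. mouth regions).
    Each overlapping patch is vectorized and its mean removed; the top
    `n_filters` principal components of these vectors become the filters.
    """
    cols = []
    for img in images:
        H, W = img.shape
        for i in range(H - patch + 1):
            for j in range(W - patch + 1):
                p = img[i:i + patch, j:j + patch].ravel()
                cols.append(p - p.mean())  # remove the per-patch mean
    X = np.stack(cols, axis=1)             # (patch*patch, num_patches)
    # Eigen-decomposition of the patch scatter matrix; eigh returns
    # eigenvalues in ascending order, so reverse to get leading components.
    _, eigvecs = np.linalg.eigh(X @ X.T)
    top = eigvecs[:, ::-1][:, :n_filters]
    return top.T.reshape(n_filters, patch, patch)

# Toy usage on random stand-in images (real input would be lip-region frames).
rng = np.random.default_rng(0)
imgs = rng.standard_normal((4, 32, 32))
filters = pca_filters(imgs, patch=7, n_filters=8)
print(filters.shape)  # (8, 7, 7)
```

A second stage would repeat the same procedure on the filter responses of the first stage; block histograms of binarized responses then give the unsupervised features fed to the LSTM network.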
Original language | English |
---|---|
Title of host publication | Computer Vision - ACCV 2016 Workshops, ACCV 2016 International Workshops, Revised Selected Papers |
Editors | Kai-Kuang Ma, Jiwen Lu, Chu-Song Chen |
Publisher | Springer Verlag |
Pages | 264-276 |
Number of pages | 13 |
ISBN (Print) | 9783319544267 |
DOIs | |
Publication status | Published - 2017 |
Event | 13th Asian Conference on Computer Vision, ACCV 2016 - Taipei, Taiwan, Province of China. Duration: 20 Nov 2016 → 24 Nov 2016 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 10117 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 13th Asian Conference on Computer Vision, ACCV 2016 |
---|---|
Country/Territory | Taiwan, Province of China |
City | Taipei |
Period | 20/11/16 → 24/11/16 |
Bibliographical note
Publisher Copyright: © Springer International Publishing AG 2017.