Abstract
In the task of talking face generation, the objective is to generate a face video with lips synchronized to the corresponding audio while preserving visual details and identity information. Current methods face the challenge of learning accurate lip synchronization while avoiding detrimental effects on visual quality, as well as robustly evaluating such synchronization. To tackle these problems, we propose utilizing an audio-visual speech representation expert (AV-HuBERT) for calculating lip synchronization loss during training. Moreover, leveraging AV-HuBERT's features, we introduce three novel lip synchronization evaluation metrics, aiming to provide a comprehensive assessment of lip synchronization performance. Experimental results, along with a detailed ablation study, demonstrate the effectiveness of our approach and the utility of the proposed evaluation metrics.
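The abstract describes using a pretrained audio-visual speech encoder's features to compute a lip synchronization loss during training. As an illustrative sketch only (the paper's actual loss formulation, feature extraction, and AV-HuBERT interface are not shown here), one simple way to score synchronization from such features is a per-frame cosine distance between aligned video and audio embeddings; `sync_loss` and its inputs below are hypothetical names:

```python
import numpy as np

def sync_loss(video_feats: np.ndarray, audio_feats: np.ndarray) -> float:
    """Mean cosine distance between aligned per-frame embeddings.

    video_feats, audio_feats: (T, D) arrays of temporally aligned features,
    e.g. from a pretrained audio-visual encoder (extraction not shown).
    Returns 0.0 when the two streams are perfectly aligned in feature space.
    """
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    a = audio_feats / np.linalg.norm(audio_feats, axis=1, keepdims=True)
    cos_sim = np.sum(v * a, axis=1)        # per-frame cosine similarity
    return float(np.mean(1.0 - cos_sim))   # average cosine distance

# Identical feature streams give (near-)zero loss.
feats = np.random.default_rng(0).normal(size=(8, 16))
loss = sync_loss(feats, feats)
```

This is only a minimal distance-based proxy; a training objective would typically be contrastive (comparing matched against mismatched audio-video pairs) rather than a plain per-frame distance.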
Original language | English |
---|---|
Host publication title | Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024 |
Publisher | IEEE Computer Society |
Pages | 6003-6013 |
Number of pages | 11 |
ISBN (Electronic) | 9798350365474 |
DOIs | |
Publication status | Published - 2024 |
Event | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024 - Seattle, United States Duration: 16 Jun 2024 → 22 Jun 2024 |
Publication series
Name | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
---|---|
ISSN (Print) | 2160-7508 |
ISSN (Electronic) | 2160-7516 |
Conference
Conference | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024 |
---|---|
Country/Territory | United States |
City | Seattle |
Period | 16/06/24 → 22/06/24 |
Bibliographic note
Publisher Copyright: © 2024 IEEE.