Abstract
In this paper, we present a three-stage approach that creates realistic facial animations by tracking the expressions of a human face in 2D and transferring them to a human-like 3D model in real time. Our calibration-free method is based on an average human face and requires no training. Tracking is performed with a single camera, which enables practical applications on tablets and mobile devices, and the expressions are transferred with a joint-based system to improve the quality and persuasiveness of the animations. In the first stage, a joint-based facial rig that provides mobility to pseudo-muscles is attached to the 3D model. The second stage covers tracking the 2D positions of facial landmarks from the single camera view and transferring the relative 3D movement data to the corresponding joints on the model. The last stage records the animation using a partially automated key-framing technique. Experiments on the extended Cohn-Kanade dataset, using peak frames of frontal-view videos, show that the presented method produces visually satisfying facial animations.
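As a rough illustration of the second stage, the sketch below tracks 2D facial landmarks from a single camera and converts their displacement from a neutral reference into offsets for named rig joints. It is not the authors' implementation: the abstract does not name a tracker or rig API, so dlib's 68-point landmark model, the `LANDMARK_TO_JOINT` mapping, and the `apply_joint_offset` stub are all illustrative assumptions. The paper uses an average human face as the calibration-free reference, whereas this sketch simply takes the first tracked frame as a stand-in.

```python
# Minimal sketch: single-camera 2D landmark tracking with displacement
# transfer to hypothetical rig joints. Assumes dlib's pretrained
# shape_predictor_68_face_landmarks.dat is available on disk.
import cv2
import dlib
import numpy as np

# Hypothetical mapping from dlib landmark indices to rig joints
# (mouth corners and outer brow points); the paper's rig is not specified.
LANDMARK_TO_JOINT = {48: "jnt_mouth_L", 54: "jnt_mouth_R",
                     17: "jnt_brow_L", 26: "jnt_brow_R"}

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def normalized_landmarks(shape):
    """Return the 68 landmark positions normalized for translation and scale."""
    pts = np.array([(shape.part(i).x, shape.part(i).y)
                    for i in range(68)], dtype=float)
    left_eye, right_eye = pts[36:42].mean(axis=0), pts[42:48].mean(axis=0)
    center = (left_eye + right_eye) / 2.0
    scale = np.linalg.norm(right_eye - left_eye)  # interocular distance
    return (pts - center) / scale

def apply_joint_offset(joint, dx, dy):
    # Stand-in for the rig interface: a real system would move the named
    # joint (and thus its pseudo-muscle) on the 3D model.
    print(f"{joint}: dx={dx:+.3f}  dy={dy:+.3f}")

cap = cv2.VideoCapture(0)          # single camera, e.g. a tablet front camera
neutral = None                     # neutral reference (see note above)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if not faces:
        continue
    pts = normalized_landmarks(predictor(gray, faces[0]))
    if neutral is None:
        neutral = pts              # first-frame stand-in for the average face
        continue
    for idx, joint in LANDMARK_TO_JOINT.items():
        dx, dy = pts[idx] - neutral[idx]
        apply_joint_offset(joint, dx, dy)
cap.release()
```

Normalizing by interocular distance keeps the displacements comparable across subjects and camera distances, which is one plausible way to remain calibration-free.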
Original language | English |
---|---|
Pages (from-to) | 12519-12535 |
Number of pages | 17 |
Journal | Multimedia Tools and Applications |
Volume | 78 |
Issue number | 9 |
DOIs | |
Publication status | Published - 1 May 2019 |
Bibliographical note
Publisher Copyright: © 2018, Springer Science+Business Media, LLC, part of Springer Nature.
Funding
Acknowledgements: This work was supported by the TÜBİTAK project 113E067 and the EU Seventh Framework Programme Marie Curie FP7 integration project.
Funders | Funder number |
---|---|
Seventh Framework Programme |
Keywords
- Expression transfer
- Facial animation
- Facial tracking
- Performance-driven animation