From 2D to 3D real-time expression transfer for facial animation

Beste Ekmen*, Hazım Kemal Ekenel

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

In this paper, we present a three-stage approach, which creates realistic facial animations by tracking expressions of a human face in 2D and transferring them to a human-like 3D model in real-time. Our calibration-free method, which is based on an average human face, does not require training. The tracking is performed using a single camera to enable several practical applications, for example, using tablets and mobile devices, and the expressions are transferred with a joint-based system to improve the quality and persuasiveness of animations. In the first step of the method, a joint-based facial rig providing mobility to pseudo-muscles is attached to the 3D model. The second stage covers the tracking of 2D positions of the facial landmarks from a single camera view and transfer of 3D relative movement data to move the respective joints on the model. The last step includes the recording of animation using a partially automated key-framing technique. Experiments on the extended Cohn-Kanade dataset using peak frames in frontal-view videos have shown that the presented method produces visually satisfying facial animations.
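The abstract's second stage, tracking 2D landmark positions from a single camera and converting their relative movement into joint motion, can be illustrated with a short sketch. The code below is an approximation only, not the authors' implementation: it assumes dlib's 68-point landmark predictor for tracking, uses the first detected frame as a neutral reference (the paper is calibration-free and relies on an average face model instead), normalizes displacements by inter-ocular distance, and maps averaged landmark offsets to hypothetical rig joints.

    import cv2
    import dlib
    import numpy as np

    # Illustrative only: dlib's 68-point predictor stands in for the tracker;
    # the joint mapping and the neutral-frame reference are assumptions.
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    # Hypothetical mapping from rig joints to the landmark indices that drive them.
    JOINT_LANDMARKS = {
        "jaw": [8],                             # chin tip
        "mouth_left": [48],                     # left mouth corner
        "mouth_right": [54],                    # right mouth corner
        "brow_left": [19], "brow_right": [24],  # mid-brow points
    }

    def landmarks(gray):
        """Return a (68, 2) array of landmark positions, or None if no face is found."""
        faces = detector(gray)
        if len(faces) == 0:
            return None
        shape = predictor(gray, faces[0])
        return np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float64)

    cap = cv2.VideoCapture(0)      # single camera, e.g. a laptop or tablet webcam
    neutral = None                 # landmarks of the first (assumed neutral) frame

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = landmarks(gray)
        if pts is None:
            continue
        if neutral is None:
            neutral = pts          # simple stand-in for the paper's average-face reference
            continue

        # Normalize displacements by inter-ocular distance so they are scale-invariant.
        iod = np.linalg.norm(pts[36] - pts[45])
        delta = (pts - neutral) / max(iod, 1e-6)

        # Average the displacements of the landmarks assigned to each joint; these
        # relative offsets would then drive the corresponding joints of the 3D rig.
        joint_offsets = {j: delta[idx].mean(axis=0) for j, idx in JOINT_LANDMARKS.items()}
        print(joint_offsets)

        cv2.imshow("tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc quits
            break

    cap.release()
    cv2.destroyAllWindows()

In a full pipeline such per-joint offsets would be applied to the rigged 3D model and recorded with the partially automated key-framing step described in the abstract.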

Original language: English
Pages (from-to): 12519-12535
Number of pages: 17
Journal: Multimedia Tools and Applications
Volume: 78
Issue number: 9
DOIs
Publication status: Published - 1 May 2019

Bibliographical note

Publisher Copyright:
© 2018, Springer Science+Business Media, LLC, part of Springer Nature.

Funding

Acknowledgements: This work was supported by the TÜBİTAK project 113E067 and the EU Seventh Framework Programme Marie Curie FP7 integration project.

Funders: Seventh Framework Programme

Keywords

• Expression transfer
• Facial animation
• Facial tracking
• Performance-driven animation
