3-D face recognition using local appearance-based models

Hazım Kemal Ekenel*, Gao Hua, Rainer Stiefelhagen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)


In this paper, we present a local appearance-based approach for 3-D face recognition. In the proposed algorithm, we first register the 3-D point clouds to provide a dense correspondence between faces. Afterwards, we analyze two mapping techniques, closest-point mapping and ray-casting mapping, for constructing depth images from the well-registered point clouds. The resulting depth images are then divided into local regions, and the discrete cosine transform (DCT) is applied to each region to extract local information. The local features are combined at the feature level for classification. Experimental results on the FRGC version 2.0 face database show that the proposed algorithm outperforms well-known face recognition algorithms.
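The block-based DCT feature extraction described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the block size, the number of retained coefficients, and the `local_dct_features` helper are assumptions chosen for clarity, and the zigzag ordering and DC-coefficient removal are common conventions in DCT-based face recognition rather than details confirmed by this record.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (n x n).
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    return C * np.sqrt(2 / n)

def local_dct_features(depth_image, block=8, n_coeffs=5):
    """Divide a depth image into non-overlapping blocks, apply a 2-D DCT
    to each block, and concatenate the first few zigzag-ordered AC
    coefficients of every block into one feature vector.
    Block size and coefficient count are illustrative choices."""
    h, w = depth_image.shape
    C = dct_matrix(block)
    # Zigzag scan order over a block x block coefficient grid.
    idx = sorted(((r, c) for r in range(block) for c in range(block)),
                 key=lambda rc: (rc[0] + rc[1], rc[0]))
    feats = []
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            patch = depth_image[r:r + block, c:c + block]
            coeffs = C @ patch @ C.T           # 2-D DCT-II of the block
            zz = [coeffs[i, j] for i, j in idx]
            feats.extend(zz[1:1 + n_coeffs])   # drop DC, keep low frequencies
    return np.asarray(feats)
```

Combining the per-block coefficients into a single vector corresponds to the feature-level fusion mentioned in the abstract; classification would then compare these vectors, e.g. by nearest-neighbor distance.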

Original language: English
Pages (from-to): 630-636
Number of pages: 7
Journal: IEEE Transactions on Information Forensics and Security
Issue number: 3
Publication status: Published - Sept 2007
Externally published: Yes


Manuscript received May 1, 2007. This work was supported in part by the European Union under the integrated project CHIL (Computers in the Human Interaction Loop) under Contract 506909, and in part by the German Research Foundation (DFG) as part of the Collaborative Research Center 588, Humanoid Robots: Learning and Cooperating Multimodal Robots. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Bir Bhanu.

Funders (funder number):
    • Computers in the Human Interaction Loop (506909)
    • European Commission
    • Deutsche Forschungsgemeinschaft


    • 3-D face recognition
    • Automatic registration
    • Depth image
    • Local appearance face recognition


