A video-based door monitoring system using local appearance-based face models

Hazim Kemal Ekenel*, Johannes Stallkamp, Rainer Stiefelhagen

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

24 Citations (Scopus)

Abstract

In this paper, we present a real-time video-based face recognition system. The developed system identifies subjects as they enter a room. This application scenario poses many challenges: continuous, uncontrolled variations of facial appearance due to illumination, pose, expression, and occlusion of non-cooperative subjects must be handled to allow successful recognition. To achieve this, the system first detects and tracks the eyes for proper registration. The registered faces are then individually classified by a local appearance-based face recognition algorithm. The confidence scores obtained from each classification are progressively combined to provide the identity estimate for the entire sequence. We introduce three measures to weight the contribution of each individual frame to the overall classification decision: distance-to-model (DTM), distance-to-second-closest (DT2ND), and their combination. We have conducted closed-set and open-set identification experiments on a database of 41 subjects. The experimental results show that the proposed system reaches high correct recognition rates. In addition, it performs facial feature and face detection, tracking, and recognition in real time.
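The frame-weighted score fusion described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' exact formulation: the function names, the exponential form of the DTM weight, and the margin-based DT2ND weight are assumptions made for the sketch.

```python
import numpy as np

def frame_weights(distances, tau=1.0):
    """Compute per-frame weights from frame-to-model distances.

    distances: (n_frames, n_classes) array holding, for each frame, the
    distance between its feature vector and each subject's face model.
    Returns illustrative DTM and DT2ND weights and their combination.
    """
    sorted_d = np.sort(distances, axis=1)
    # DTM: a frame close to its best-matching model gets a high weight
    # (exponential decay with scale tau is an assumed choice).
    dtm = np.exp(-sorted_d[:, 0] / tau)
    # DT2ND: a large margin between the closest and second-closest
    # model indicates an unambiguous, reliable frame.
    dt2nd = sorted_d[:, 1] - sorted_d[:, 0]
    return dtm, dt2nd, dtm * dt2nd

def fuse_sequence(distances, weights):
    """Combine per-frame scores into a sequence-level identity estimate."""
    scores = -distances                            # smaller distance -> higher score
    fused = (weights[:, None] * scores).sum(axis=0)
    return int(np.argmax(fused))                   # index of the estimated identity
```

For example, three frames over three enrolled subjects, where the first two frames clearly favor subject 0 and one noisy frame favors subject 1, still fuse to subject 0 because the weighting suppresses ambiguous frames.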

Original language: English
Pages (from-to): 596-608
Number of pages: 13
Journal: Computer Vision and Image Understanding
Volume: 114
Issue number: 5
DOIs: yes
Publication status: Published - May 2010
Externally published: Yes

Keywords

  • Discrete cosine transform
  • Door monitoring
  • Face detection
  • Face recognition from video
  • Feature tracking
  • Fusion
