QCompere @ REPERE 2013

Hervé Bredin*, Johann Poignant, Guillaume Fortier, Makarand Tapaswi, Viet Bac Le, Anindya Roy, Claude Barras, Sophie Rosset, Achintya Sarkar, Qian Yang, Hua Gao, Alexis Mignon, Jakob Verbeek, Laurent Besacier, Georges Quénot, Hazim Kemal Ekenel, Rainer Stiefelhagen

*Corresponding author for this work

Research output: Contribution to journal › Conference article › peer-review

6 Citations (Scopus)

Abstract

We describe the QCompere consortium's submissions to the REPERE 2013 evaluation campaign. The REPERE challenge aims at gathering four communities (face recognition, speaker identification, optical character recognition and named entity detection) around the same goal: multimodal person recognition in TV broadcast. First, four mono-modal components are introduced (one for each of the foregoing communities), constituting the elementary building blocks of our various submissions. Then, depending on the target modality (speaker or face recognition) and on the task (supervised or unsupervised recognition), four different fusion techniques are introduced: they can be summarized as propagation-, classifier-, rule- or graph-based approaches. Finally, their performance is evaluated on the REPERE 2013 test set and their advantages and limitations are discussed.
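To make the propagation-based fusion mentioned in the abstract concrete, here is a minimal sketch (not the authors' code) of the unsupervised naming idea: person names read by the video OCR component from on-screen title overlays are propagated to the speaker diarization cluster they co-occur with most. All data structures and timestamps below (ocr_names, speaker_turns, cluster ids) are hypothetical illustrations.

```python
# Minimal sketch of propagation-based fusion: OCR-detected names are
# assigned to temporally overlapping speaker clusters, with votes
# weighted by overlap duration. Hypothetical inputs, not the paper's code.
from collections import Counter, defaultdict

def overlap(a, b):
    """Duration (in seconds) shared by two (start, end) intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def propagate_names(ocr_names, speaker_turns):
    """
    ocr_names: list of ((start, end), name) from the OCR component.
    speaker_turns: list of ((start, end), cluster_id) from diarization.
    Returns {cluster_id: most co-occurring name}.
    """
    votes = defaultdict(Counter)
    for span_n, name in ocr_names:
        for span_s, cluster in speaker_turns:
            d = overlap(span_n, span_s)
            if d > 0:
                votes[cluster][name] += d  # weight each vote by overlap
    return {c: counter.most_common(1)[0][0] for c, counter in votes.items()}

# Toy usage with made-up timestamps:
names = [((10.0, 15.0), "Nicolas Sarkozy"), ((40.0, 44.0), "Anne Sinclair")]
turns = [((8.0, 20.0), "spk1"), ((38.0, 50.0), "spk2")]
print(propagate_names(names, turns))
# {'spk1': 'Nicolas Sarkozy', 'spk2': 'Anne Sinclair'}
```

Weighting votes by overlap duration is one plausible design choice; the actual submissions combine this kind of propagation with classifier-, rule- and graph-based alternatives depending on the target modality and task.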

Original language: English
Pages (from-to): 49-54
Number of pages: 6
Journal: CEUR Workshop Proceedings
Volume: 1012
Publication status: Published - 2013
Externally published: Yes
Event: 1st Workshop on Speech, Language and Audio in Multimedia, SLAM 2013 - Marseille, France
Duration: 22 Aug 2013 - 23 Aug 2013

Bibliographical note

Publisher Copyright:
Copyright © 2013 for the individual papers by the papers' authors.

Keywords

  • Face recognition
  • Multimodal fusion
  • Named entity detection
  • Speaker identification
  • Video optical character recognition
