Integration of regularized l1 tracking and instance segmentation for video object tracking

Filiz Gurkan, Bilge Gunsel

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

We introduce a tracking-by-detection method that integrates a deep object detector with a particle filter tracker under a regularization framework in which the tracked object is represented by a sparse dictionary. A novel observation model that establishes consensus between the detector and the tracker is formulated, enabling the dictionary to be updated under the guidance of the deep detector. This yields an efficient representation of the object appearance throughout the video sequence and hence improves robustness to occlusion and pose changes. The proposed tracker employs a 7D affine state vector formulated to output deformed object bounding boxes, which significantly increases robustness to scale changes. Performance evaluation has been carried out on a subset of challenging VOT2016 and VOT2018 benchmark video sequences covering the 80 object classes of COCO. Numerical results demonstrate that the introduced tracker, L1DPF-M, achieves comparable robustness while outperforming state-of-the-art trackers in success rate, where the improvement achieved at IoU-th = 0.5 on the used VOT2016 and VOT2018 sequences is 11% and 9%, respectively.
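The abstract describes a particle filter over a 7D affine state vector, reweighted by an observation model built on an l1-regularized sparse representation. The paper's exact parameterization and likelihood are not given here, so the following is only a minimal sketch of the generic predict-reweight-resample loop the abstract implies; the state dimension, motion model, and the stand-in likelihood (which replaces the paper's sparse-dictionary reconstruction error and detector consensus) are all assumptions.

```python
# Minimal particle-filter sketch (assumed structure; not the authors' code).
import numpy as np

STATE_DIM = 7  # 7D affine state vector, as stated in the abstract


def predict(particles, rng, noise_scale=0.05):
    """Propagate particles with a random-walk motion model (an assumption)."""
    return particles + noise_scale * rng.standard_normal(particles.shape)


def reweight(particles, likelihood_fn):
    """Compute normalized importance weights from an observation model."""
    w = np.array([likelihood_fn(p) for p in particles])
    w = np.clip(w, 1e-12, None)  # guard against all-zero weights
    return w / w.sum()


def resample(particles, weights, rng):
    """Systematic resampling to combat weight degeneracy."""
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[np.minimum(idx, n - 1)]


rng = np.random.default_rng(0)
target = np.zeros(STATE_DIM)  # toy "true" state for this demo
particles = rng.standard_normal((200, STATE_DIM))

# Stand-in likelihood: in the paper this would come from the l1
# reconstruction error of the sparse dictionary plus detector consensus.
likelihood = lambda p: np.exp(-np.sum((p - target) ** 2))

for _ in range(30):
    particles = predict(particles, rng)
    weights = reweight(particles, likelihood)
    particles = resample(particles, weights, rng)

estimate = particles.mean(axis=0)
print(np.linalg.norm(estimate - target))
```

In the full method, the likelihood would score each affine-warped candidate patch by how well the sparse dictionary reconstructs it, and the dictionary itself would be updated when the detector and tracker agree; this sketch keeps only the filtering skeleton.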

Original language: English
Pages (from-to): 284-300
Number of pages: 17
Journal: Neurocomputing
Volume: 423
DOIs
Publication status: Published - 29 Jan 2021

Bibliographical note

Publisher Copyright:
© 2020 Elsevier B.V.

Keywords

  • Deep object detector
  • Object tracking
  • Regularized particle filtering
  • Sparse representation
