Abstract
Segmenting image sequences into meaningful layers is fundamental to many applications such as surveillance, tracking, and video summarization. Background subtraction techniques are popular for their simplicity and, while they provide a dense (pixelwise) estimate of foreground/background, they typically ignore image motion, which can provide a rich source of information about scene structure. Conversely, layered motion estimation techniques typically ignore the temporal persistence of image appearance and provide parametric (rather than dense) estimates of optical flow. Recent work adaptively combines motion and appearance estimation in a mixture model framework to achieve robust tracking. Here we extend mixture model approaches to cope with dense motion and appearance estimation. We develop a unified Bayesian framework to simultaneously estimate the appearance of multiple image layers and their corresponding dense flow fields from image sequences. Both the motion and appearance models adapt over time, and the probabilistic formulation can be used to provide a segmentation of the scene into foreground/background regions. This extension of mixture models includes prior probability models for the spatial and temporal coherence of motion and appearance. Experimental results show that the simultaneous estimation of appearance models and flow fields in multiple layers improves the estimation of optical flow at motion boundaries.
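To make the mixture-model idea concrete, the sketch below shows a minimal per-pixel two-layer mixture in the spirit of the abstract: a Gaussian background appearance model plus a uniform foreground outlier component, with a responsibility-weighted online appearance update. This is an illustrative simplification only, not the authors' method; all function names and parameters (`layer_responsibilities`, `update_background`, `p_bg`, `lr`) are invented for this example, and the paper's dense per-layer flow fields and spatial/temporal coherence priors are omitted.

```python
import numpy as np

def layer_responsibilities(frame, bg_mean, bg_var, p_bg=0.9, outlier_density=1.0 / 256.0):
    """Posterior probability that each pixel belongs to the background layer,
    under a two-component mixture: Gaussian background + uniform foreground."""
    # Gaussian likelihood of the observed intensity under the background model
    lik_bg = np.exp(-0.5 * (frame - bg_mean) ** 2 / bg_var) / np.sqrt(2 * np.pi * bg_var)
    num = p_bg * lik_bg
    den = num + (1.0 - p_bg) * outlier_density
    return num / den  # responsibility in [0, 1]

def update_background(frame, bg_mean, bg_var, resp, lr=0.05):
    """Responsibility-weighted online update of the background appearance,
    so pixels judged foreground barely affect the model (EM-style adaptation)."""
    rate = lr * resp
    new_mean = (1.0 - rate) * bg_mean + rate * frame
    new_var = (1.0 - rate) * bg_var + rate * (frame - new_mean) ** 2
    return new_mean, np.maximum(new_var, 1e-4)  # floor the variance for stability

# Example: pixels matching the background get high responsibility,
# pixels far from the background model get low responsibility (foreground).
bg_mean = np.full((2, 2), 0.5)
bg_var = np.full((2, 2), 0.01)
w_match = layer_responsibilities(np.full((2, 2), 0.5), bg_mean, bg_var)
w_fg = layer_responsibilities(np.full((2, 2), 0.0), bg_mean, bg_var)
```

Thresholding the responsibility (e.g. below 0.5) yields a dense foreground mask; in the paper, each layer additionally carries its own dense flow field, and both appearance and motion are estimated jointly rather than per pixel in isolation.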
| Original language | English |
| --- | --- |
| Article number | 1384964 |
| Journal | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
| Volume | 2004-January |
| Issue number | January |
| DOIs | |
| Publication status | Published - 2004 |
| Externally published | Yes |
| Event | 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2004 - Washington, United States. Duration: 27 Jun 2004 → 2 Jul 2004 |
Bibliographical note
Publisher Copyright: © 2004 IEEE.