3-D mesh geometry compression with set partitioning in the spectral domain

Ulug Bayazit*, Umut Konur, Hasan Fehmi Ates

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

This paper presents a highly efficient progressive 3-D mesh geometry coder based on the region-adaptive transform of the spectral mesh compression method. A hierarchical set partitioning technique, originally developed for the efficient compression of wavelet transform coefficients in high-performance wavelet-based image coding methods, is adapted here to compress the coefficients of this transform efficiently. Experiments confirm that the proposed coder, employing such a region-adaptive transform, attains compression performance rarely matched by other state-of-the-art 3-D mesh geometry compression algorithms. A new, high-performance fixed spectral basis method is also proposed to reduce the computational complexity of the transform. Many-to-one mappings relate the coded irregular mesh region to a regular mesh whose spectral basis is used. To prevent the loss of compression performance caused by the low-pass nature of such mappings, transitions are made from transform-based coding to spatial coding on a per-region basis at high coding rates. Experimental results show the performance advantage of the newly proposed fixed spectral basis method over the original fixed spectral basis method in the literature, which employs one-to-one mappings.
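Two ingredients of the abstract can be sketched compactly: a spectral transform of a mesh region's geometry, obtained by projecting the vertex coordinates onto the eigenvectors of the region's graph Laplacian (in the spirit of Karni-Gotsman spectral mesh compression), and a bit-plane significance test of the kind used by hierarchical set partitioning coders. The following Python sketch is illustrative only and is not the authors' implementation; the function names, the tetrahedron example, and the simplified single-pass significance test are assumptions made for this example.

import numpy as np

def laplacian(num_vertices, edges):
    # Combinatorial graph Laplacian L = D - A of the region's connectivity.
    L = np.zeros((num_vertices, num_vertices))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def spectral_coefficients(vertices, edges):
    # Project the x, y, z coordinates onto the Laplacian eigenvectors
    # (the spectral basis); columns are ordered from low to high "frequency".
    L = laplacian(len(vertices), edges)
    _, basis = np.linalg.eigh(L)
    return basis.T @ vertices, basis

def significant_indices(coeffs, threshold):
    # Toy significance test: coefficients whose magnitude reaches the
    # current bit-plane threshold (a real set partitioning coder would
    # test and split whole sets hierarchically).
    magnitudes = np.abs(coeffs).max(axis=1)
    return np.flatnonzero(magnitudes >= threshold)

# Tiny example: a tetrahedron treated as a single mesh "region".
verts = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
tetra_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

coeffs, basis = spectral_coefficients(verts, tetra_edges)
T = 2.0 ** np.floor(np.log2(np.abs(coeffs).max()))  # initial bit-plane threshold
print("significant coefficient indices:", significant_indices(coeffs, T))
print("reconstruction error:", np.linalg.norm(basis @ coeffs - verts))

Because the basis is orthonormal, the projection is lossless until the coefficients are quantized; progressive coding then transmits them bit-plane by bit-plane, from the coarsest threshold T downward.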

Original language: English
Article number: 5159432
Pages (from-to): 179-188
Number of pages: 10
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Volume: 20
Issue number: 2
DOIs
Publication status: Published - Feb 2010

Keywords

  • Computer graphics
  • Data compression
  • Data visualization
  • Transform coding
  • Virtual reality
