Disentanglement with factor quantized variational autoencoders

Gulcin Baykal*, Melih Kandemir, Gozde Unal

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Disentangled representation learning aims to represent the underlying generative factors of a dataset independently of one another in a latent representation. In our work, we propose a discrete variational autoencoder (VAE) based model in which ground truth information about the generative factors is not provided to the model. We demonstrate the advantages of learning discrete representations over continuous representations in facilitating disentanglement. Furthermore, we propose incorporating an inductive bias into the model to further enhance disentanglement. Specifically, we propose scalar quantization of the latent variables, replacing each latent variable with a scalar value from a global codebook, and we add a total correlation term to the optimization objective as an inductive bias. Our method, called FactorQVAE, combines optimization-based disentanglement approaches with discrete representation learning, and it outperforms previous disentanglement methods in terms of two disentanglement metrics (DCI and InfoMEC) while improving reconstruction performance. Our code can be found at https://github.com/ituvisionlab/FactorQVAE.
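The scalar quantization described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the codebook size, latent shapes, and the use of a fixed (rather than learned) codebook are all assumptions for demonstration, and training details such as the straight-through gradient estimator and the total correlation term are omitted.

```python
import numpy as np

def scalar_quantize(z, codebook):
    """Replace each scalar latent with its nearest value from a global codebook.

    z:        (batch, latent_dim) array of continuous latents.
    codebook: (K,) array of scalar code values shared across all latent dims.
    Returns the quantized latents and the selected code indices.
    """
    # Distance of every latent scalar to every codebook entry: (B, D, K)
    dists = np.abs(z[..., None] - codebook[None, None, :])
    idx = dists.argmin(axis=-1)          # nearest code per latent: (B, D)
    return codebook[idx], idx

# Toy example with a hypothetical 16-entry codebook
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))              # continuous encoder outputs
codebook = np.linspace(-2.0, 2.0, 16)    # fixed global scalar codebook
z_q, idx = scalar_quantize(z, codebook)
```

Quantizing each latent dimension against one shared scalar codebook (rather than vector-quantizing whole latent vectors, as in VQ-VAE) is the structural distinction the abstract draws.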

Original language: English
Article number: 131968
Journal: Neurocomputing
Volume: 661
DOIs
Publication status: Published - 14 Jan 2026

Bibliographical note

Publisher Copyright:
© 2025 The Author(s)

Keywords

  • Discrete representation learning
  • Disentanglement
  • Vector quantized variational autoencoders
