Abstract
Disentangled representation learning aims to represent the underlying generative factors of a dataset independently of one another in a latent representation. In our work, we propose a discrete variational autoencoder (VAE) based model in which ground truth information about the generative factors is not provided to the model. We demonstrate the advantages of learning discrete representations over continuous representations in facilitating disentanglement. Furthermore, we propose incorporating an inductive bias into the model to further enhance disentanglement. Specifically, we propose scalar quantization of the latent variables in a latent representation with scalar values from a global codebook, and we add a total correlation term to the optimization as an inductive bias. Our method, called FactorQVAE, combines optimization-based disentanglement approaches with discrete representation learning, and it outperforms previous disentanglement methods in terms of two disentanglement metrics (DCI and InfoMEC) while improving reconstruction performance. Our code can be found at https://github.com/ituvisionlab/FactorQVAE.
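The abstract's central mechanism is scalar quantization of latent variables against a single global codebook. The following minimal PyTorch sketch illustrates what such quantization could look like; it is not the authors' implementation, and the class name `ScalarQuantizer`, the codebook size, and the use of a straight-through gradient estimator are all assumptions for illustration only.

```python
import torch
import torch.nn as nn

class ScalarQuantizer(nn.Module):
    """Illustrative sketch (hypothetical, not the paper's code): replace each
    continuous latent scalar with its nearest value from one global codebook
    of scalars shared across all latent dimensions."""

    def __init__(self, codebook_size=64):
        super().__init__()
        # One global codebook of scalar values shared by every latent dimension.
        self.codebook = nn.Parameter(torch.randn(codebook_size))

    def forward(self, z):
        # z: (batch, latent_dim) continuous encoder outputs.
        # Squared distance of every latent scalar to every codebook scalar.
        dists = (z.unsqueeze(-1) - self.codebook) ** 2   # (batch, latent_dim, K)
        idx = dists.argmin(dim=-1)                       # nearest codebook entry
        z_q = self.codebook[idx]                         # quantized latents
        # Straight-through estimator (a standard trick, assumed here):
        # forward pass uses z_q, backward pass copies gradients to z.
        return z + (z_q - z).detach(), idx

# Example usage (shapes only; values are arbitrary):
quantizer = ScalarQuantizer(codebook_size=64)
z = torch.randn(8, 10)       # a batch of 8 continuous 10-dim latents
z_q, idx = quantizer(z)      # quantized latents and their codebook indices
```

The total correlation term mentioned in the abstract would be added to the training objective separately; its estimation (e.g., via a density-ratio discriminator as in FactorVAE) is not shown here.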
| Field | Value |
|---|---|
| Original language | English |
| Article number | 131968 |
| Journal | Neurocomputing |
| Volume | 661 |
| DOIs | |
| Publication status | Published - 14 Jan 2026 |
Bibliographical note
Publisher Copyright: © 2025 The Author(s)
Keywords
- Discrete representation learning
- Disentanglement
- Vector quantized variational autoencoders