Decoupled Latent Diffusion Model for Enhancing Image Generation

Bibliographic Details
Main Authors: Hyun-Tae Choi, Kensuke Nakamura, Byung-Woo Hong
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/11091282/
Summary: Latent Diffusion Models have emerged as an efficient alternative to conventional diffusion approaches by compressing high-dimensional images into a lower-dimensional latent space using a Variational Autoencoder (VAE) and performing diffusion in that space. In the standard Latent Diffusion Model (LDM), the latent code is formed by sampling from a Gaussian distribution (i.e., combining both the mean and the standard deviation), which helps regularize the latent space but appears to contribute little beyond the deterministic component. Motivated by recent empirical observations that the decoder relies primarily on the latent mean, our work reexamines this paradigm and proposes a decoupled latent diffusion model that focuses on a simplified latent representation. Specifically, we compare three configurations: (i) the standard latent code, (ii) a concatenated representation that explicitly preserves both mean and variance, and (iii) a deterministic mean-only representation. Our extensive experiments on multiple benchmark datasets demonstrate that, compared to the standard approach, the mean-only configuration not only maintains but in many cases improves synthesis quality, producing sharper and more coherent images while reducing unnecessary noise. These findings suggest that a simplified, deterministic latent representation can yield more stable and efficient generative models, challenging the conventional reliance on latent sampling in diffusion-based image synthesis.
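
The three latent configurations described in the abstract can be illustrated with a minimal PyTorch sketch (this is not the authors' code; the encoder interface returning a mean `mu` and log-variance `logvar`, the 4-channel latent shape, and the helper name `build_latent` are assumptions made for illustration):

    import torch

    def build_latent(mu: torch.Tensor, logvar: torch.Tensor, mode: str) -> torch.Tensor:
        # Form the latent code handed to the diffusion model.
        # mu, logvar: VAE encoder outputs of shape (B, C, H, W).
        std = torch.exp(0.5 * logvar)
        if mode == "sampled":      # (i) standard LDM latent: z = mu + sigma * eps
            return mu + std * torch.randn_like(std)
        if mode == "concat":       # (ii) keep mean and std explicitly as extra channels
            return torch.cat([mu, std], dim=1)
        if mode == "mean":         # (iii) deterministic mean-only representation
            return mu
        raise ValueError(f"unknown mode: {mode}")

    # Example with a dummy 4-channel latent, as in common LDM setups.
    mu, logvar = torch.randn(2, 4, 32, 32), torch.randn(2, 4, 32, 32)
    z = build_latent(mu, logvar, mode="mean")  # mean-only configuration
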
ISSN: 2169-3536