Decoupled Latent Diffusion Model for Enhancing Image Generation
Latent Diffusion Models have emerged as an efficient alternative to conventional diffusion approaches by compressing high-dimensional images into a lower-dimensional latent space using a Variational Autoencoder (VAE) and performing diffusion in that space. In the standard Latent Diffusion Model (LDM), t...
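The abstract describes the core LDM mechanism: a VAE encoder compresses images into a compact latent, and the diffusion (noising/denoising) process runs entirely in that latent space. The sketch below is a minimal, illustrative rendering of that idea in PyTorch; the module shapes, the toy noise schedule, and the names `TinyVAE` and `TinyDenoiser` are assumptions for demonstration, not the architecture proposed in the paper.

```python
# Minimal sketch of latent diffusion: encode to a latent with a VAE,
# add noise in latent space, train a denoiser to predict that noise.
# All sizes and modules here are illustrative assumptions.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    """Toy VAE mapping 3x64x64 images to a 4x16x16 latent and back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 4, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

class TinyDenoiser(nn.Module):
    """Toy noise-prediction network operating purely in latent space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 4, 3, padding=1),
        )

    def forward(self, z_t, t):
        # A real denoiser would condition on the timestep t; omitted for brevity.
        return self.net(z_t)

vae, denoiser = TinyVAE(), TinyDenoiser()
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)  # toy noise schedule

x = torch.randn(8, 3, 64, 64)                 # batch of images
with torch.no_grad():
    z0 = vae.encoder(x)                       # diffusion happens in this latent space
t = torch.randint(0, 1000, (z0.size(0),))
a = alphas_cumprod[t].view(-1, 1, 1, 1)
eps = torch.randn_like(z0)
z_t = a.sqrt() * z0 + (1 - a).sqrt() * eps    # forward (noising) process in latent space
loss = nn.functional.mse_loss(denoiser(z_t, t), eps)  # epsilon-prediction objective
```

At sampling time the same denoiser would be applied iteratively to pure latent noise, and the VAE decoder would map the final latent back to image space.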
| Main Authors: | Hyun-Tae Choi, Kensuke Nakamura, Byung-Woo Hong |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2025-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/11091282/ |
Similar Items

- Remote Sensing Image Semantic Segmentation Sample Generation Using a Decoupled Latent Diffusion Framework
  by: Yue Xu, et al.
  Published: (2025-06-01)
- Spatial Compression Methods for Latent Diffusion Models
  by: Vladimir Abramov, et al.
  Published: (2025-04-01)
- Dual-Stream Contrastive Learning for Medical Visual Representations Using Synthetic Images Generated by Latent Diffusion Model
  by: Weitao Ye, et al.
  Published: (2025-01-01)
- LatentPINNs: Generative physics-informed neural networks via a latent representation learning
  by: Mohammad H. Taufik, et al.
  Published: (2025-06-01)
- Regularization for Unconditional Image Diffusion Models via Shifted Data Augmentation
  by: Kensuke Nakamura, et al.
  Published: (2025-01-01)