SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis

This page presents SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three-times-larger UNet backbone: the increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model that improves the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to the code and model weights.

The last year has brought enormous leaps in deep generative modeling across various data domains, such as natural language [50], audio [17], and visual media [38, 37, 40, 44, 15, 3, 7]. In this report, we focus on the latter and unveil SDXL, a drastically improved version of Stable Diffusion. Stable Diffusion is a latent text-to-image diffusion model (DM) which serves as the foundation for an array of recent advancements in, e.g., 3D classification [43], controllable image editing [54], image personalization [10], synthetic data augmentation [48], and graphical user interface prototyping [51]. Remarkably, the scope of applications has been extraordinarily extensive, encompassing fields as diverse as music generation [9] and reconstructing images from fMRI brain scans [49].

User studies demonstrate that SDXL consistently surpasses all previous versions of Stable Diffusion by a significant margin (see Fig. 1). In this report, we present the design choices which lead to this boost in performance, encompassing i) a 3× larger UNet backbone compared to previous Stable Diffusion models (Sec. 2.1), ii) two simple yet effective additional conditioning techniques (Sec. 2.2) which do not require any form of additional supervision, and iii) a separate diffusion-based refinement model which applies a noising-denoising process [28] to the latents produced by SDXL to improve the visual quality of its samples (Sec. 2.5).

A major concern in the field of visual media creation is that while black-box models are often recognized as state-of-the-art, the opacity of their architecture prevents faithfully assessing and validating their performance. This lack of transparency hampers reproducibility, stifles innovation, and prevents the community from building upon these models to further the progress of science, technology, and art.
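To make the refinement step (iii) more concrete, the following is a toy sketch of the noising-denoising idea it applies: a base sample is partially re-noised and then denoised again. The denoiser here is a deliberate stand-in (it simply pulls the sample back toward the original latent); in SDXL the reverse steps are taken by a trained diffusion refiner operating on the base model's latents, and the `strength`, step count, and update rule below are illustrative assumptions, not the paper's actual schedule.

```python
import numpy as np

def refine(latent: np.ndarray, strength: float = 0.3,
           steps: int = 10, seed: int = 0) -> np.ndarray:
    """Partially noise `latent`, then run `steps` toy denoising updates."""
    rng = np.random.default_rng(seed)
    # Forward (noising) step: mix in Gaussian noise proportional to `strength`,
    # so only part of the original signal is destroyed.
    x = (np.sqrt(1.0 - strength) * latent
         + np.sqrt(strength) * rng.standard_normal(latent.shape))
    for _ in range(steps):
        # Stand-in reverse (denoising) step: move the sample back toward the
        # data manifold, here approximated by the original latent itself.
        x = x + 0.3 * (latent - x)
    return x

base_latent = np.ones((4, 8, 8))   # shape chosen to resemble a 4-channel latent
refined = refine(base_latent, strength=0.3)
print(refined.shape)               # (4, 8, 8)
```

Because only a fraction of the noise schedule is traversed, the refiner can sharpen high-frequency detail without discarding the global composition the base model produced.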
Moreover, these closed-source strategies make it challenging to assess the biases and limitations of these models in an impartial and objective way, which is crucial for their responsible and ethical deployment. With SDXL, we are releasing an open model that achieves performance competitive with black-box image generation models.
