This post provides the base paper for the enhance image quality project.

Abstract:

We present SDXL, a latent diffusion model for text-to-image synthesis that forms the basis of the enhance image quality project. Compared to previous versions of Stable Diffusion, SDXL leverages a three times larger UNet backbone: the increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model that improves the visual fidelity of samples generated by SDXL using a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of black-box state-of-the-art image generators. In the spirit of promoting open research and fostering transparency in large-model training and evaluation, we provide access to code and model weights.
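For readers who want to try the two-stage base-plus-refiner workflow described in the abstract, the sketch below uses the Hugging Face diffusers library, which is an assumption on our part; the paper itself only points to the Stability AI repository. The model identifiers, step counts, and prompt are illustrative, not prescribed by the paper.

```python
# Minimal sketch of the SDXL base + refiner workflow via the diffusers library.
# Model IDs, step counts, and the prompt are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# Stage 1: the base model generates latents from the text prompt.
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# Stage 2: the refinement model improves visual fidelity post hoc (image-to-image).
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"

# Keep the base output in latent space so the refiner can consume it directly.
latents = base(prompt=prompt, num_inference_steps=40, output_type="latent").images

# The refiner further denoises the latents to sharpen fine detail.
image = refiner(prompt=prompt, image=latents, num_inference_steps=20).images[0]
image.save("astronaut.png")
```

Handing latents (rather than a decoded image) from the base pipeline to the refiner avoids an extra decode/encode round trip, which is one common way to chain the two stages.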

A major concern in the field of visual media creation is that although black-box models are often recognised as state-of-the-art, the opacity of their architecture prevents the community from faithfully assessing and validating their performance. This lack of transparency hampers reproducibility, stifles innovation, and prevents the community from building upon these models to further the progress of science and art. Moreover, these closed-source strategies make it challenging to assess the biases and limitations of these models. With SDXL, we are releasing an open model that achieves performance competitive with black-box image generation models; code and model weights are available at https://github.com/Stability-AI/generative-models

ENHANCE IMAGE QUALITY WITH SDXL: IMPROVING LATENT DIFFUSION MODELS FOR HIGH-RESOLUTION IMAGE SYNTHESIS