
Abstract:

The landscape of image generation has been forever changed by open-vocabulary diffusion models. However, at their core, these models use transformers, which makes generation slow. Better implementations to increase the throughput of these transformers have emerged, but they still evaluate the entire model. In this paper, we instead speed up diffusion models by exploiting the natural redundancy in generated images through merging redundant tokens. After making some diffusion-specific improvements to Token Merging (ToMe), our ToMe for Stable Diffusion can reduce the number of tokens in an existing Stable Diffusion model by up to 60% while still producing high-quality images without any extra training. In the process, we speed up image generation by up to 2x and reduce memory consumption by up to 5.6x. Furthermore, this speed-up stacks with efficient implementations such as xFormers, minimally impacting quality while being up to 5.4x faster for large images.
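The core idea of merging redundant tokens can be illustrated with a simplified sketch of bipartite matching: split the tokens into two sets, pair each token in one set with its most similar partner in the other by cosine similarity, and average the best-matching pairs. This is only an illustrative approximation, not the paper's exact algorithm (ToMe uses bipartite soft matching inside each transformer block, and ToMe for Stable Diffusion adds diffusion-specific refinements); all names here are hypothetical.

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, r: int) -> np.ndarray:
    """Merge the r most similar token pairs by averaging (simplified sketch).

    tokens: (N, C) array of token features. Tokens are split into two
    alternating sets A and B; each A-token is matched to its most similar
    B-token by cosine similarity, and the r best-scoring pairs are merged.
    """
    a, b = tokens[0::2], tokens[1::2]
    # Cosine similarity between every token in A and every token in B.
    an = a / np.linalg.norm(a, axis=1, keepdims=True)
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = an @ bn.T                       # shape (len(a), len(b))
    best_b = sim.argmax(axis=1)           # best B partner for each A token
    best_score = sim.max(axis=1)
    # Merge the r A-tokens with the highest match scores into their partners.
    merge_idx = np.argsort(-best_score)[:r]
    keep_idx = np.setdiff1d(np.arange(len(a)), merge_idx)
    merged_b = b.copy()
    for i in merge_idx:
        j = best_b[i]
        merged_b[j] = (merged_b[j] + a[i]) / 2   # average the merged pair
    # r tokens removed: unmerged A-tokens plus the (updated) B-tokens remain.
    return np.concatenate([a[keep_idx], merged_b], axis=0)
```

Because each merge removes one token, reducing tokens by up to 60% per block directly shrinks the quadratic attention cost discussed below, which is where the reported 2x generation speed-up comes from.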

With the rise of powerful diffusion models such as DALL·E, Imagen, and Stable Diffusion, generating high-quality images has never been easier. However, running these models can be expensive, especially for large images. All of these methods function by denoising images through several evaluations of a transformer [22] backbone, meaning that computation scales with the square of the number of tokens. The code is available at https://github.com/dbolya/tomesd.
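The quadratic scaling above compounds with resolution: token count grows with image area, so self-attention cost grows with roughly the fourth power of the image side length. A back-of-the-envelope sketch (the patch size of 8 is an illustrative assumption, not a value from the paper):

```python
def attention_cost(height: int, width: int, patch: int = 8) -> int:
    """Rough self-attention cost model: (number of tokens)^2.

    The token count is the number of patch-by-patch tiles covering the
    image, so cost scales with the square of the area, i.e. the fourth
    power of the side length. The patch size 8 is an assumed example.
    """
    tokens = (height // patch) * (width // patch)
    return tokens * tokens

# Doubling the resolution quadruples the token count and
# multiplies the attention cost by roughly 16.
ratio = attention_cost(1024, 1024) / attention_cost(512, 512)
```

This is why the paper's largest gains (up to 5.4x with xFormers) appear for large images: the quadratic term dominates there, so removing tokens pays off most.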

Token Merging for Fast Stable Diffusion