# Sana for webui

Only tested with Forge2, though I don't think there is anything Forge-specific here.

Works-for-me™ on 8GB VRAM, 16GB RAM (GTX 1070).


## Install

Go to the **Extensions** tab, then **Install from URL**, and use the URL of this repository.

Needs an updated diffusers. The easiest way to ensure the necessary versions are installed is to edit `requirements_versions.txt` in the webUI folder:

```
diffusers>=0.32.0
accelerate>=0.26.0
```

Models are downloaded on demand; the minimum download is ~9GB.


#### Note

If **noUnload** is selected, models are kept in memory; otherwise they are reloaded for each run. The **unload models** button removes them from memory.


## Current UI screenshot


## Change log

#### 07/02/2025

- improved image2image / inpainting

#### 31/01/2025

- switched img2img to use ForgeCanvas when available in Forge2. The Gradio 4 ImageEditor is bugged and constantly consumes GPU or CPU.

#### 17/01/2025

- added an option for an alternative CFG calculation, toggled with the **'0'** button. Costs ~50% more inference time; PAG takes priority.

#### 11/01/2025

- added the 4K model (unlikely to work until diffusers adds VAE tiling);
- changed model loading so the VAE is downloaded only once, regardless of how many models are used;
- should no longer download the fp32 transformer models (not sure why `pipeline.from_pretrained()` ignored the specified variant, but loading the transformer separately avoids the issue).
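The separate-transformer workaround could look roughly like the sketch below. This is not the extension's actual code: the repository id, variant name, and dtype are assumptions for illustration.

```python
def load_sana_pipeline(repo_id="Efficient-Large-Model/Sana_1600M_1024px_diffusers"):
    """Load the Sana transformer explicitly so only the desired variant is fetched.

    Hypothetical sketch: the repo id, "bf16" variant, and dtype are
    assumptions, not taken from this extension's source.
    """
    import torch
    from diffusers import SanaPipeline, SanaTransformer2DModel

    # Download only the chosen variant of the transformer weights.
    transformer = SanaTransformer2DModel.from_pretrained(
        repo_id, subfolder="transformer", variant="bf16", torch_dtype=torch.bfloat16
    )
    # Passing the transformer in stops from_pretrained() from
    # downloading (and possibly mis-selecting) its own copy.
    return SanaPipeline.from_pretrained(
        repo_id, transformer=transformer, torch_dtype=torch.bfloat16
    )
```

Preloading a sub-model and handing it to the pipeline is a standard diffusers pattern for controlling exactly which weight files are fetched.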

#### 01/01/2025

- added initial sampler selection; not sure yet how many work. Euler and Heun need more steps than DPM++ 2M;
- added rescale CFG, which can be very effective.

#### 26/12/2024

- fixes for the gallery and for sending to i2i.

#### 25/12/2024 (2)

- added a complex human instruction toggle (CHI button) for automatic prompt enhancement;
- avoided unnecessary text encoder loads when the prompt hasn't changed.

#### 25/12/2024

- added control of the shift parameter. Initial tests suggest it is less useful here than with Flux or SD3.
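For context, the shift parameter in flow-matching models like SD3 and Flux remaps the noise schedule toward higher-noise timesteps. A sketch of the common SD3-style time shift (assuming this extension exposes the same mapping, which is not confirmed by the source):

```python
def shift_sigma(sigma, shift=3.0):
    """SD3-style timestep shift for flow-matching schedules.

    Sketch of the standard formula sigma' = s*sigma / (1 + (s-1)*sigma);
    shift=1 leaves the schedule unchanged, larger values spend more
    steps at high noise levels.
    """
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)
```

The endpoints 0 and 1 are fixed points, so only the interior of the schedule is reshaped.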

#### 24/12/2024 (2)

- added PAG and a first pass at i2i.

#### 24/12/2024

- first implementation. The 2K models need ~16GB VRAM for the VAE.

Example: