(memory?) issue with stable_diffusion_v2_1_webui_colab when mounting Google Drive #21
Comments
--force-enable-xformers is obsolete, please use --xformers |
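(For reference, these flags are passed where the notebook launches the web UI; a minimal, hedged illustration rather than the notebook's actual launch cell:)

```python
# Hedged illustration only (not the notebook's actual cell): pass --xformers
# instead of the obsolete --force-enable-xformers flag when starting the web UI.
!python launch.py --share --xformers
```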
Thanks for the quick reply, @camenduru. The issue is not related to the xformers module (even if it appeared in my quoted output).
This issue exclusively happens with this particular notebook (again, I don't have it with, for example, the Analog Diffusion module), and only if I try to mount my Google Drive before running the A1111 installation cell. |
this is working https://github.com/camenduru/stable-diffusion-webui-colab/blob/main/stable_diffusion_v2_1_webui_colab.ipynb |
wait it is not working with gdrive interesting 😋 |
I change nothing in your code. All I do is:
I do exactly these steps with your colab for Analog Diffusion and it works flawlessly, as expected. I have no clue why there's this difference in behaviour. |
if we convert the checkpoint to fp16 |
Had the same problem and figured out a workaround: crash the Colab runtime right before launching the UI, which will free up the system RAM. Do this after downloading the models:
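(The snippet itself was not preserved in this copy of the thread; a common Colab trick that matches the description, shown here as an assumption rather than the original code, is to kill the kernel process so the runtime restarts with its RAM freed while files already on disk are kept:)

```python
# Assumed reconstruction of the elided cell, not the original snippet:
# SIGKILL the current Python process; Colab restarts the runtime with system
# RAM freed, while files already downloaded under /content remain on disk.
import os

os.kill(os.getpid(), 9)
```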
This will crash the runtime.
|
thanks @MitPitt ❤ good idea 🤩 |
Thanks, @MitPitt, but I still can't make it work. I split @camenduru's original notebook into multiple cells as in the screenshot. I executed your recommended workaround, then proceeded to launch A1111, but I still run out of system RAM. What am I doing wrong? |
Google Drive is taking RAM as well; I had this problem. You will have to download any needed files manually, without mounting the drive.
And you can find your file's ID by looking at the share link: |
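(A hedged sketch of what that looks like; the file ID and output path below are placeholders. A share link has the form https://drive.google.com/file/d/<FILE_ID>/view?usp=sharing, and gdown, which ships with Colab, can fetch the file by that ID without mounting the drive:)

```python
# Placeholder file ID and output path, for illustration only: download a
# checkpoint directly from Google Drive by its ID instead of mounting the drive.
!gdown https://drive.google.com/uc?id=YOUR_FILE_ID -O /content/stable-diffusion-webui/models/Stable-diffusion/model.ckpt
```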
hi @system1system2 👋 I converted it to fp16, now 2.58 GB. please use this with gdrive: https://huggingface.co/ckpt/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned-fp16.ckpt |
Thank you so much for converting this. Unfortunately, I still have issues. I have modified your Colab notebook to download the correct file and save it with the old file name, so I don't have to rename the yaml file as well:
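(A sketch of the kind of change described, assuming the directory layout camenduru's notebooks generally use; the exact flags and paths in the modified cell may differ. The point is to fetch the fp16 checkpoint but write it under the original filename so the matching .yaml config keeps working without a rename:)

```python
# Assumed paths and filenames, for illustration only: download the fp16 ckpt
# but save it under the old full-precision name so the existing .yaml is matched.
!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/ckpt/stable-diffusion-2-1/resolve/main/v2-1_768-ema-pruned-fp16.ckpt -d /content/stable-diffusion-webui/models/Stable-diffusion -o v2-1_768-ema-pruned.ckpt
```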
It correctly downloads the half-precision variant (which is saved on the Colab disk as a 2.4GB file), but then it insists on loading a 3.4GB file, and that's where it runs out of memory as usual:
Also notice that during the process, Python raises a new error:
I don't know if it's important or not, as I cannot load the UI to test image generation. |
at this point, I am thinking that there may be a memory leak in the code 🤔 |
Agree with all above. I tried just installing the WebUI without connecting my drive. It died the same death as described above.
And I switched to this latest version because I can't fix the "RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)" error on the older version, which I used to be able to fix. (Now none of the suggested edits to the ddpm file work.) I would so dearly love to train more embeddings, but I can't seem to find a version that runs for me on Colab (with a paid account). Edit: But I did get the WebUI running from midjourney_v4_diffusion_webui_colab.ipynb before attaching the Google Drive. (Now trying to mount the drive does nothing. No pop-up, no error message, no mount.) And the runtime error about indices is still a problem. I am really sad about this. |
Forking and patching Stability-AI's stablediffusion repository brings memory usage within 12 GB. ddPn08/automatic1111-colab#16 |
Thank you, @thx-pw. ❤ ❤ |
|
@thx-pw, this one is working for me as well.
Please check it.
You can do it in one line. |
I am running. So far, sed is throwing "no such file or directory" errors (for both sed calls). Edit: but apparently it doesn't matter? I couldn't mount my Google Drive, but I just uploaded my training images and am now training an embedding. |
hi @MisoSpree 👋 sed is working; we are getting this message because we are using sed inside sed before getting the file from the repo, a little trick hehe
|
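(One way to read that trick, sketched below with placeholder names; MARKER_LINE, OLD/NEW, and the util.py target are illustrative, not the notebook's real patterns. The useful part is that launch.py, rather than the install cell, applies the inner patch, so it runs only after the web UI's launcher has cloned the repositories; on this reading, an early "no such file or directory" at install time is harmless:)

```python
# Placeholder names throughout (MARKER_LINE, OLD/NEW, target file are
# illustrative): make launch.py apply the inner patch itself, so the patch
# only runs after the web UI's launcher has cloned the repositories.
launch_py = "/content/stable-diffusion-webui/launch.py"
inner_patch = (
    '    os.system("sed -i \'s/OLD/NEW/g\' '
    'repositories/stable-diffusion-stability-ai/ldm/util.py")\n'
)

with open(launch_py) as f:
    lines = f.readlines()

patched = []
for line in lines:
    patched.append(line)
    if "MARKER_LINE" in line:  # e.g. the call that clones the repositories
        patched.append(inner_patch)

with open(launch_py, "w") as f:
    f.writelines(patched)
```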
Roger that. Ignoring error messages is right up my alley. |
Note that when training an embedding, the loss is reported as NaN: [Epoch 499: 10/10]loss: nan: 10% 4999/50000 [45:34<6:49:13, 1.83it/s] And the image put out every N steps is just black. Looks like something is still broken. |
oh no 😐 |
@camenduru believe it or not, it works (at least for ordinary txt2img generations; I didn't try to train an embedding like @MisoSpree). The weird sed trick worked, but you might want to say something about it in the documentation or you'll have an avalanche of people reporting the same thing. Thanks for the patience in fixing this. I'm running without any issues this morning thanks to you. |
In my environment, I had no problem training an embedding. |
hi @ddPn08 can you train without black example output? please show us how |
I tried, and I also got a black output 😭 |
I created an embedding from the Train tab of AUTOMATIC1111 and trained without changing any settings. |
I am glad I am not the only one. Were you seeing the loss reported as NaN? Edit: Just to check, I did this again today. This is in Colab. Today I first connected my Google Drive. (This is different from last time, when I didn't connect the Google Drive at all.) Then I ran https://github.com/camenduru/stable-diffusion-webui-colab/blob/main/stable_diffusion_v2_1_webui_colab.ipynb. Everything installed. I generated a single text-to-image (which I always do as a test when I get the WebUI open). That worked fine. Then I created an embedding and ran the training. Still, the loss is being reported as NaN and the first output image was all black. Then I stopped the training. |
Even after removing the low-RAM patch, I am still getting the NaN error during training. |
Just a quick note to let you know, @camenduru, that this new version of the notebook runs out of memory again :) The problem is the single sed line. If you replace it with the previous two lines below, the notebook works just fine, including the triton installation and the new CivitAI extension:
|
I tested this one and it worked with gdrive. I didn't change anything. Maybe you are getting less RAM? I got 12.68GB |
Same amount. Not sure why it works with the two sed lines but fails with a single one. |
Hi. All your colab notebooks are amazing. Thanks for sharing them with the community.
I have a problem with one of them:
stable_diffusion_v2_1_webui_colab
If I create a new cell to mount my Google Drive and run it before your cell to initialize SD 2.1, the initialization is interrupted halfway and I get this output:
I read that the ^C interrupt might indicate the system has run out of memory. If I do not run the cell that mounts Google Drive, everything works fine.
Also, and this is where it's strange, if I run another of your Colab notebooks, like analog_diffusion_webui_colab, by running the cell that mounts Google Drive first, everything works fine, too.
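(For reference, the Drive-mount cell referred to throughout this thread is the standard Colab one:)

```python
# Standard Colab API for mounting Google Drive at /content/drive.
from google.colab import drive

drive.mount('/content/drive')
```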