May I ask if someone has successfully deployed it locally? #56

Open
yiniesta opened this issue Oct 27, 2024 · 11 comments

Comments

@yiniesta

I have been working on this for about three days, constantly downloading repositories and models and installing Python packages, but so far it hasn't started running. Has anyone successfully deployed it locally? And is the result comparable to the website (https://clarityai.co/dashboard)? Are its functions similar?

@yiniesta
Author

Building Docker image from environment in cog.yaml...
⚠ Stripping patch version from Python version 3.10.4 to 3.10
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 1.58kB done
#1 DONE 0.0s

#2 resolve image config for docker-image://docker.io/docker/dockerfile:1.4
#2 DONE 5.7s

#3 docker-image://docker.io/docker/dockerfile:1.4@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc
#3 CACHED

#4 [internal] load .dockerignore
#4 transferring context: 369B done
#4 DONE 0.0s

#5 [internal] load metadata for r8.im/cog-base:cuda11.8-python3.10-torch2.0.1
#5 DONE 2.6s

#6 [stage-0 1/15] FROM r8.im/cog-base:cuda11.8-python3.10-torch2.0.1@sha256:9d031c3d28013b7de4c6bba259d22144a4e3426f1e535906632edf93d87f39f8
#6 DONE 0.0s

#7 [internal] load build context
#7 transferring context: 66.44kB done
#7 DONE 0.0s

#8 [stage-0 4/15] RUN --mount=type=cache,target=/root/.cache/pip pip install --no-cache-dir /tmp/cog-0.11.5-py3-none-any.whl 'pydantic<2'
#8 CACHED

#9 [stage-0 5/15] COPY .cog/tmp/build20241027113213.5034881362711260/requirements.txt /tmp/requirements.txt
#9 CACHED

#10 [stage-0 2/15] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked apt-get update -qq && apt-get install -qqy && rm -rf /var/lib/apt/lists/*
#10 CACHED

#11 [stage-0 3/15] COPY .cog/tmp/build20241027113213.5034881362711260/cog-0.11.5-py3-none-any.whl /tmp/cog-0.11.5-py3-none-any.whl
#11 CACHED

#12 [stage-0 6/15] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt
#12 CACHED

#13 [stage-0 7/15] RUN curl -o /usr/local/bin/pget -L "https://github.com/replicate/pget/releases/latest/download/pget_$(uname -s)_$(uname -m)"
#13 0.168 % Total % Received % Xferd Average Speed Time Time Time Current
#13 0.168 Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 8952k 100 8952k 0 0 63889 0 0:02:23 0:02:23 --:--:-- 76997
#13 DONE 143.7s

#14 [stage-0 8/15] RUN chmod +x /usr/local/bin/pget
#14 DONE 0.3s

#15 [stage-0 9/15] RUN git config --global --add safe.directory /src
#15 DONE 0.2s

#16 [stage-0 10/15] RUN git config --global --add safe.directory /src/extensions/sd-webui-controlnet
#16 DONE 0.2s

#17 [stage-0 11/15] RUN git config --global --add safe.directory /src/extensions/multidiffusion-upscaler-for-automatic1111
#17 DONE 0.2s

#18 [stage-0 12/15] RUN git clone https://github.com/philz1337x/stable-diffusion-webui-cog-init /stable-diffusion-webui
#18 0.145 Cloning into '/stable-diffusion-webui'...
#18 DONE 13.0s

#19 [stage-0 13/15] RUN python /stable-diffusion-webui/init_env.py --skip-torch-cuda-test
#19 0.417 COMMANDLINE_ARGS: --xformers
#19 0.441 fatal: No names found, cannot describe anything.
#19 0.441 Python 3.10.15 (main, Sep 9 2024, 23:28:08) [GCC 11.4.0]
#19 0.441 Version: 1.8.0-RC
#19 0.441 Commit hash: 601bdb0f2018669605869888144ecf06a513ad54
#19 0.444 Installing clip
#19 16.19 Installing open_clip
#19 33.03 Cloning assets into /stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
#19 33.03 Cloning into '/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'...
#19 34.60 Cloning Stable Diffusion into /stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
#19 34.61 Cloning into '/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
#19 61.22 Cloning Stable Diffusion XL into /stable-diffusion-webui/repositories/generative-models...
#19 61.23 Cloning into '/stable-diffusion-webui/repositories/generative-models'...
#19 91.46 Cloning K-diffusion into /stable-diffusion-webui/repositories/k-diffusion...
#19 91.46 Cloning into '/stable-diffusion-webui/repositories/k-diffusion'...
#19 93.12 Cloning BLIP into /stable-diffusion-webui/repositories/BLIP...
#19 93.13 Cloning into '/stable-diffusion-webui/repositories/BLIP'...
#19 98.28 Installing requirements
#19 DONE 1342.0s

#20 [stage-0 14/15] RUN sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py
#20 0.179 sed: can't read /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py: No such file or directory
#20 ERROR: process "/bin/sh -c sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py" did not complete successfully: exit code: 2

[stage-0 14/15] RUN sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py:
0.179 sed: can't read /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py: No such file or directory


Dockerfile:19

17 | RUN git clone https://github.com/philz1337x/stable-diffusion-webui-cog-init /stable-diffusion-webui
18 | RUN python /stable-diffusion-webui/init_env.py --skip-torch-cuda-test
19 | >>> RUN sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py
20 | WORKDIR /src
21 | EXPOSE 5000

ERROR: failed to solve: process "/bin/sh -c sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py" did not complete successfully: exit code: 2
ⅹ Failed to build Docker image: exit status 1

@yiniesta
Author

The above is my error log. I don't know how to solve it; I hope someone can help me. Thank you very much.

@philz1337x
Owner

Can you describe your setup and the commands you are using while you try to run it?

@philz1337x
Owner

These are the steps I take to run it on a GPU:

  1. start a Lambda Labs GPU instance
  2. install cog
  3. clone repo
  4. run download-weights.py
  5. run the cog model
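Expressed as shell commands, those steps look roughly like this. This is a sketch, not verified instructions from the owner: the repo URL, the weights script name (written both as download-weights.py and download_weights.py in this thread), and the test image name are assumptions.

```shell
# 1. on a fresh GPU box (e.g. a Lambda Labs instance), install cog
sudo curl -o /usr/local/bin/cog -L \
  "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog

# 2. clone the repo (URL assumed from the owner's GitHub account)
git clone https://github.com/philz1337x/clarity-upscaler
cd clarity-upscaler

# 3. fetch the model weights (script name as used elsewhere in this thread)
python download_weights.py

# 4. run a prediction through cog (image name is a placeholder)
cog predict -i image="test.jpg"
```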

@yiniesta
Author

Thank you for following up on my question.

I reached the final step, executing the command 'cog predict -i image="test.jpg"'.
I deleted the following command:

  • sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.4/lib/python3.10/site-packages/clip/clip.py

but another error occurred:
Building Docker image from environment in cog.yaml...
⚠ Stripping patch version from Python version 3.10.4 to 3.10
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 1.44kB done
#1 DONE 0.0s
#2 resolve image config for docker-image://docker.io/docker/dockerfile:1.4
#2 DONE 1.2s
#3 docker-image://docker.io/docker/dockerfile:1.4@sha256:9ba7531bd80fb0a858632727cf7a112fbfd19b17e94c4e84ced81e24ef1a0dbc
#3 CACHED
#4 [internal] load .dockerignore
#4 transferring context: 369B done
#4 DONE 0.0s
#5 [internal] load metadata for r8.im/cog-base:cuda11.8-python3.10-torch2.0.1
#5 DONE 2.4s
#6 [stage-0 1/14] FROM r8.im/cog-base:cuda11.8-python3.10-torch2.0.1@sha256:9d031c3d28013b7de4c6bba259d22144a4e3426f1e535906632edf93d87f39f8
#6 DONE 0.0s
#7 [internal] load build context
#7 transferring context: 66.44kB done
#7 DONE 0.0s
#8 [stage-0 5/14] COPY .cog/tmp/build20241028184635.0413383198327581/requirements.txt /tmp/requirements.txt
#8 CACHED
#9 [stage-0 7/14] RUN curl -o /usr/local/bin/pget -L "https://github.com/replicate/pget/releases/latest/download/pget_$(uname -s)_$(uname -m)"
#9 CACHED
#10 [stage-0 12/14] RUN git clone https://github.com/philz1337x/stable-diffusion-webui-cog-init /stable-diffusion-webui
#10 CACHED
#11 [stage-0 6/14] RUN --mount=type=cache,target=/root/.cache/pip pip install -r /tmp/requirements.txt
#11 CACHED
#12 [stage-0 8/14] RUN chmod +x /usr/local/bin/pget
#12 CACHED
#13 [stage-0 3/14] COPY .cog/tmp/build20241028184635.0413383198327581/cog-0.11.5-py3-none-any.whl /tmp/cog-0.11.5-py3-none-any.whl
#13 CACHED
#14 [stage-0 9/14] RUN git config --global --add safe.directory /src
#14 CACHED
#15 [stage-0 11/14] RUN git config --global --add safe.directory /src/extensions/multidiffusion-upscaler-for-automatic1111
#15 CACHED
#16 [stage-0 13/14] RUN python /stable-diffusion-webui/init_env.py --skip-torch-cuda-test
#16 CACHED
#17 [stage-0 10/14] RUN git config --global --add safe.directory /src/extensions/sd-webui-controlnet
#17 CACHED
#18 [stage-0 4/14] RUN --mount=type=cache,target=/root/.cache/pip pip install --no-cache-dir /tmp/cog-0.11.5-py3-none-any.whl 'pydantic<2'
#18 CACHED
#19 [stage-0 2/14] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked apt-get update -qq && apt-get install -qqy && rm -rf /var/lib/apt/lists/*
#19 CACHED
#20 [stage-0 14/14] WORKDIR /src
#20 CACHED
#21 exporting to image
#21 exporting layers done
#21 preparing layers for inline cache 0.0s done
#21 writing image sha256:149bd0a8b19fde48eb59da36ca01e3fd082639fb63873bdbe4f3f377dc2c1fb8 done
#21 naming to docker.io/library/cog-clarity-upscaler-base done
#21 DONE 0.0s

Starting Docker image cog-clarity-upscaler-base and running setup()...
docker: Error response from daemon: could not select device driver "" with capabilities: [[gpu]].
Missing device driver, re-trying without GPU
Error response from daemon: page not found
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
import_hook.py tried to disable xformers, but it was not requested. Ignoring
Style database not found: /src/styles.csv
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
ControlNet preprocessor location: /src/extensions/sd-webui-controlnet/annotator/downloads
2024-10-28 10:46:48,785 - ControlNet - INFO - ControlNet v1.1.440
2024-10-28 10:46:48,955 - ControlNet - INFO - ControlNet v1.1.440
Loading weights [None] from /src/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors
Available checkpoints: [{'title': 'epicrealism_naturalSinRC1VAE.safetensors', 'model_name': 'epicrealism_naturalSinRC1VAE', 'hash': None, 'sha256': None, 'filename': '/src/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors', 'config': None}, {'title': 'flat2DAnimerge_v45Sharp.safetensors', 'model_name': 'flat2DAnimerge_v45Sharp', 'hash': None, 'sha256': None, 'filename': '/src/models/Stable-diffusion/flat2DAnimerge_v45Sharp.safetensors', 'config': None}, {'title': 'juggernaut_reborn.safetensors', 'model_name': 'juggernaut_reborn', 'hash': None, 'sha256': None, 'filename': '/src/models/Stable-diffusion/juggernaut_reborn.safetensors', 'config': None}]
2024-10-28 10:46:49,233 - ControlNet - INFO - ControlNet UI callback registered.
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
fatal: not a git repository (or any parent up to mount point /)
Stopping at filesystem boundary (GIT_DISCOVERY_ACROSS_FILESYSTEM not set).
Creating model from config: /src/configs/v1-inference.yaml
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/huggingface_hub/file_download.py:797: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
creating model quickly: OSError
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/src/modules/initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "/src/modules/shared_items.py", line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/src/modules/sd_models.py", line 531, in get_sd_model
load_model()
File "/src/modules/sd_models.py", line 635, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Failed to create model quickly; will retry using slow method.
loading stable diffusion model: OSError
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/src/modules/initialize.py", line 147, in load_model
shared.sd_model # noqa: B018
File "/src/modules/shared_items.py", line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/src/modules/sd_models.py", line 531, in get_sd_model
load_model()
File "/src/modules/sd_models.py", line 644, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Stable diffusion model failed to load
Applying attention optimization: InvokeAI... done.
Loading weights [None] from /src/models/Stable-diffusion/epicrealism_naturalSinRC1VAE.safetensors
Creating model from config: /src/configs/v1-inference.yaml
Exception in thread Thread-5 (load_model):
Traceback (most recent call last):
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/src/modules/initialize.py", line 153, in load_model
devices.first_time_calculation()
File "/src/modules/devices.py", line 162, in first_time_calculation
linear(x)
TypeError: 'NoneType' object is not callable
creating model quickly: OSError
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/spawn.py", line 129, in _main
return self._bootstrap(parent_sentinel)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 302, in run
self._setup(redirector)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 335, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/predictor.py", line 75, in run_setup
predictor.setup()
File "/src/predict.py", line 143, in setup
self.api.img2imgapi(req)
File "/src/modules/api/api.py", line 431, in img2imgapi
with closing(StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)) as p:
File "/src/modules/shared_items.py", line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/src/modules/sd_models.py", line 531, in get_sd_model
load_model()
File "/src/modules/sd_models.py", line 635, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Failed to create model quickly; will retry using slow method.
loading stable diffusion model: OSError
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/spawn.py", line 129, in _main
return self._bootstrap(parent_sentinel)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 302, in run
self._setup(redirector)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 335, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/predictor.py", line 75, in run_setup
predictor.setup()
File "/src/predict.py", line 143, in setup
self.api.img2imgapi(req)
File "/src/modules/api/api.py", line 431, in img2imgapi
with closing(StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args)) as p:
File "/src/modules/shared_items.py", line 128, in sd_model
return modules.sd_models.model_data.get_sd_model()
File "/src/modules/sd_models.py", line 531, in get_sd_model
load_model()
File "/src/modules/sd_models.py", line 644, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Stable diffusion model failed to load
Loading weights [None] from /src/models/Stable-diffusion/juggernaut_reborn.safetensors
Creating model from config: /src/configs/v1-inference.yaml
creating model quickly: OSError
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/spawn.py", line 129, in _main
return self._bootstrap(parent_sentinel)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 302, in run
self._setup(redirector)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 335, in _setup
run_setup(self._predictor)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/predictor.py", line 75, in run_setup
predictor.setup()
File "/src/predict.py", line 143, in setup
self.api.img2imgapi(req)
File "/src/modules/api/api.py", line 445, in img2imgapi
processed = process_images(p)
File "/src/modules/processing.py", line 727, in process_images
sd_models.reload_model_weights()
File "/src/modules/sd_models.py", line 784, in reload_model_weights
load_model(checkpoint_info, already_loaded_state_dict=state_dict)
File "/src/modules/sd_models.py", line 635, in load_model
sd_model = instantiate_from_config(sd_config.model)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 563, in __init__
self.instantiate_cond_stage(cond_stage_config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 630, in instantiate_cond_stage
model = instantiate_from_config(config)
File "/src/repositories/stable-diffusion-stability-ai/ldm/util.py", line 89, in instantiate_from_config
return get_obj_from_str(config["target"])(**config.get("params", dict()))
File "/src/repositories/stable-diffusion-stability-ai/ldm/modules/encoders/modules.py", line 103, in __init__
self.tokenizer = CLIPTokenizer.from_pretrained(version)
File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1809, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
Failed to create model quickly; will retry using slow method.
Loading weights [None] from /src/models/Stable-diffusion/juggernaut_reborn.safetensors
Creating model from config: /src/configs/v1-inference.yaml
ⅹ Timed out

@philz1337x
Owner

Could you describe the steps you took?

@yiniesta
Author

I have successfully executed "python download_weights.py"

and installed cog:
sudo curl -o /usr/local/bin/cog -L "https://github.com/replicate/cog/releases/latest/download/cog_$(uname -s)_$(uname -m)"
sudo chmod +x /usr/local/bin/cog

and then executed:
cog predict -i image="test.jpg"

That's all.

@philz1337x
Owner

I've never had this bug, but I would start by fixing this:

OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'openai/clip-vit-large-patch14' is the correct path to a directory containing all relevant files for a CLIPTokenizer tokenizer.
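One hedged way to chase that error, based only on the message text (not something verified in this thread): first check whether a local directory is shadowing the hub id, then pre-fetch the tokenizer while the machine still has network access (this assumes transformers is installed in the environment):

```shell
# if a local directory named like the hub id exists, transformers resolves it
# instead of downloading from the hub, which can trigger this exact OSError
if [ -d "openai/clip-vit-large-patch14" ]; then
  echo "a local directory is shadowing the hub model id; move or rename it"
fi

# otherwise, warm the cache while online so setup() can load it later
python3 -c "from transformers import CLIPTokenizer; \
CLIPTokenizer.from_pretrained('openai/clip-vit-large-patch14')"
```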

@unkn-wn

unkn-wn commented Nov 3, 2024

I have the exact same issue, haha. But I'm using WSL on Windows 11, so I'm thinking that could be the problem (cog is supposed to be for Mac/Linux, right?)

@unkn-wn

unkn-wn commented Nov 3, 2024

Okay, fixed that issue by changing the pyenv version in cog.yaml:

sed -i 's/from pkg_resources import packaging/import packaging/g' /root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/clip/clip.py
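A version-agnostic variant of that patch avoids hardcoding the pyenv patch version (3.10.4 vs 3.10.15) by asking the interpreter for its own site-packages path. This is a sketch; it assumes python3 inside the image is the pyenv interpreter that has clip installed:

```shell
# locate site-packages for whichever python the image actually uses
SITE="$(python3 -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"

# rewrite the stale pkg_resources import in clip.py in place
sed -i 's/from pkg_resources import packaging/import packaging/g' "$SITE/clip/clip.py"
```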

But now I'm getting an entirely different issue:

Starting Docker image cog-clarity-upscaler-base and running setup()...
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
  warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
import_hook.py tried to disable xformers, but it was not requested. Ignoring
Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 500: named symbol not found: str
Traceback (most recent call last):
  File "/src/modules/errors.py", line 98, in run
    code()
  File "/src/modules/devices.py", line 76, in enable_tf32
    device_id = (int(shared.cmd_opts.device_id) if shared.cmd_opts.device_id is not None and shared.cmd_opts.device_id.isdigit() else 0) or torch.cuda.current_device()
  File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
    _lazy_init()
  File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/torch/cuda/__init__.py", line 247, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 500: named symbol not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/worker.py", line 332, in _setup
    run_setup(self._predictor)
  File "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/predictor.py", line 75, in run_setup
    predictor.setup()
  File "/src/predict.py", line 46, in setup
    initialize.imports()
  File "/src/modules/initialize.py", line 34, in imports
    shared_init.initialize()
  File "/src/modules/shared_init.py", line 17, in initialize
    from modules import options, shared_options
  File "/src/modules/shared_options.py", line 3, in <module>
    from modules import localization, ui_components, shared_items, shared, interrogate, shared_gradio_themes
  File "/src/modules/interrogate.py", line 13, in <module>
    from modules import devices, paths, shared, lowvram, modelloader, errors
  File "/src/modules/devices.py", line 84, in <module>
    errors.run(enable_tf32, "Enabling TF32")
  File "/src/modules/errors.py", line 100, in run
    display(task, e)
  File "/src/modules/errors.py", line 68, in display
    te = traceback.TracebackException.from_exception(e)
  File "/root/.pyenv/versions/3.10.15/lib/python3.10/traceback.py", line 572, in from_exception
    return cls(type(exc), exc, exc.__traceback__, *args, **kwargs)
AttributeError: 'str' object has no attribute '__traceback__'
{"logger": "cog.server.runner", "timestamp": "2024-11-03T08:35:00.387164Z", "exception": "Traceback (most recent call last):\n  File \"/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/cog/server/runner.py\", line 223, in _handle_done\n    f.result()\n  File \"/root/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py\", line 451, in result\n    return self.__get_result()\n  File \"/root/.pyenv/versions/3.10.15/lib/python3.10/concurrent/futures/_base.py\", line 403, in __get_result\n    raise self._exception\ncog.server.exceptions.FatalWorkerException: Predictor errored during setup: 'str' object has no attribute '__traceback__'", "severity": "ERROR", "message": "caught exception while running setup"}
{"logger": "cog.server.http", "timestamp": "2024-11-03T08:35:00.388297Z", "exception": "Exception: setup failed", "severity": "ERROR", "message": "encountered fatal error"}
{"logger": "cog.server.http", "timestamp": "2024-11-03T08:35:00.388547Z", "severity": "ERROR", "message": "shutting down immediately"}
ⅹ Failed to get container status: exit status 1

@philz1337x you got any idea how to fix this?

@unkn-wn

unkn-wn commented Nov 4, 2024

Fixed the issue! Just had to update docker, and now everything runs locally.
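For anyone hitting the earlier 'could not select device driver "" with capabilities: [[gpu]]' error, these checks can confirm Docker's GPU wiring after updating. The CUDA image tag here is an assumption; any CUDA base image with nvidia-smi available works:

```shell
# the nvidia runtime must be registered with the docker daemon
docker info --format '{{json .Runtimes}}' | grep -q nvidia \
  || echo "nvidia container runtime not registered"

# a throwaway container should be able to see the GPU
docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```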
