
When I execute on ComfyUI, it reports OutOfMemoryError #30

Open
hxinsss opened this issue Sep 29, 2024 · 3 comments

Comments
hxinsss commented Sep 29, 2024

FETCH DATA from: /data/flux/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
got prompt
/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/functional.py:513: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1724789560443/work/aten/src/ATen/native/TensorShape.cpp:3609.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
final text_encoder_type: bert-base-uncased
/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/_dynamo/eval_frame.py:600: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants.
return fn(*args, **kwargs)
/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/utils/checkpoint.py:92: UserWarning: None of the inputs have requires_grad=True. Gradients will be None
warnings.warn(
# dzNodes: LayerStyle -> SegmentAnythingUltra V2 Processed 1 image(s).
0%| | 0/20 [00:00<?, ?it/s]
!!! Exception during processing !!! Allocation on device
Traceback (most recent call last):
File "/data/flux/ComfyUI/execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/flux/ComfyUI/execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/flux/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/data/flux/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/data/flux/ComfyUI/custom_nodes/ComfyUI_CatVTON_Wrapper/py/cat_vton.py", line 79, in catvton
result_image = pipeline(
^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/data/flux/ComfyUI/custom_nodes/ComfyUI_CatVTON_Wrapper/py/catvton/pipeline.py", line 163, in __call__
noise_pred = self.unet(
^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1216, in forward
sample, res_samples = downsample_block(
^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 1288, in forward
hidden_states = attn(
^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/models/transformers/transformer_2d.py", line 442, in forward
hidden_states = block(
^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/models/attention.py", line 466, in forward
attn_output = self.attn1(
^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/anaconda3/envs/comfyui/lib/python3.12/site-packages/diffusers/models/attention_processor.py", line 490, in forward
return self.processor(
^^^^^^^^^^^^^^^
File "/data/flux/ComfyUI/custom_nodes/ComfyUI_CatVTON_Wrapper/py/catvton/attn_processor.py", line 88, in __call__
hidden_states = F.scaled_dot_product_attention(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: Allocation on device

Got an OOM, unloading all loaded models.
Prompt executed in 29.86 seconds
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
[ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
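The traceback bottoms out in `F.scaled_dot_product_attention`, so a quick sanity check is to estimate the activation memory of a single self-attention call at the working resolution. A minimal back-of-the-envelope sketch; the batch/head/resolution numbers below are illustrative assumptions, not values read from the CatVTON workflow:

```python
def attention_bytes(batch, heads, seq_len, head_dim, dtype_bytes=2):
    # Q, K, V plus the attention output, each [batch, heads, seq_len, head_dim].
    qkv_out = 4 * batch * heads * seq_len * head_dim * dtype_bytes
    # A materialized score matrix [batch, heads, seq_len, seq_len] — the
    # quadratic term that dominates at high resolution (memory-efficient
    # SDPA kernels avoid it, but a fallback math path may not).
    scores = batch * heads * seq_len * seq_len * dtype_bytes
    return qkv_out + scores

# Assumed: a 1024x768 latent gives 128x96 tokens in the first UNet block.
seq = 128 * 96
gib = attention_bytes(batch=2, heads=8, seq_len=seq, head_dim=40) / 2**30
print(f"~{gib:.1f} GiB for one self-attention layer")
```

The quadratic `seq_len**2` term is why halving the input resolution (quartering the token count) often turns an OOM run into a working one.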

@chflame163 (Owner) commented

Try using the latest example workflow.

@hxinsss (Author) commented Sep 29, 2024

Yes, I'm sure I used the latest example workflow downloaded from ComfyUI_CatVTON_Wrapper/workflow, but it still doesn't work.

[screenshot attached]

@hxinsss hxinsss closed this as completed Sep 29, 2024
@hxinsss hxinsss reopened this Sep 29, 2024

txhno commented Nov 6, 2024

Having this issue today as well, with 16 GB VRAM and 24 GB RAM.
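No fix is confirmed in the thread. A commonly suggested first step for CUDA OOMs of this shape (an assumption here, not a maintainer-verified fix) is to enable PyTorch's expandable allocator segments, which reduces fragmentation-related allocation failures on PyTorch 2.x. It must take effect before the first CUDA allocation, i.e. before ComfyUI imports torch:

```python
import os

# Hedged workaround sketch: configure the CUDA caching allocator. In practice
# you would export this in the shell that launches ComfyUI; setting it from
# Python only works if torch has not touched the GPU yet.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")
print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

If that is not enough, reducing the workflow's input resolution is the other lever, since attention memory grows quadratically with the token count.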
