[Version: 4.49.0] Qwen2.5-VL is not supported in vLLM because of transformers #36292
Comments
me too.
me too.
me too.
me too.
cc @zucchini-nlp, but it would help a lot if someone could post the logs/error to help us figure out what's going on here!
Hey all! The issue is being fixed on the vLLM side with vllm-project/vllm#13592 afaik; the team will check compatibility with the v4.49 release. cc @Isotr0py
That's because the Qwen2.5-VL implementation in vLLM 0.7.2 (the latest release, not the latest commit) still relies on an import that breaks with transformers 4.49. The vLLM team is planning to make a new release including the corresponding fix (vllm-project/vllm#13286), perhaps today or tomorrow.
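For anyone trying to diagnose this locally, here is a small sketch for printing the installed vLLM and transformers versions, so you can tell whether you are on the affected combination; nothing here is specific to this issue beyond the two package names.

```python
# Print the installed vLLM and transformers versions.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("vllm", "transformers"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```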
Hello! We have released
Unfortunately, I can't. My GPU cluster does not have access to Hugging Face, but I have the latest version of the model stored in this path: …
Updates: I managed to use the mentioned "weird" method to solve the case by running:

pip install --force-reinstall git+https://github.com/huggingface/transformers.git@9985d06add07a4cc691dc54a7e34f54205c04d40

Do you know why this solves it? The transformers 4.49.0.dev version is what helps: the 4.49.0.dev build can support Qwen2.5-VL-7B, but running "pip install transformers --upgrade" to get the 4.49.0 release does not help.
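As a quick way to confirm that a given transformers install actually ships the Qwen2.5-VL classes, a minimal check like the following can be used (a sketch; it only verifies that the class is importable and does not load any weights):

```python
# Verify that the installed transformers version exposes the Qwen2.5-VL model class.
# On 4.49.0 (or a 4.49.0.dev build) this import should succeed; on older versions it raises ImportError.
import transformers

print("transformers version:", transformers.__version__)

try:
    from transformers import Qwen2_5_VLForConditionalGeneration  # noqa: F401
    print("Qwen2_5_VLForConditionalGeneration is available")
except ImportError as exc:
    print("Qwen2.5-VL classes not found:", exc)
```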
GOOD NEWS
@usun1997, hello, it still doesn't work. Could you share your version?
Check your config.json file, and change the processor name in preprocessor_config.json to "image_processor_type": "Qwen2VLImageProcessor". This should work for most cases if you have the latest transformers.
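If you prefer to apply that edit programmatically rather than by hand, a minimal sketch along these lines should work (MODEL_DIR is a placeholder for your local model directory; the value written is the one suggested above):

```python
# Patch the image_processor_type field in a locally stored model's preprocessor_config.json.
import json
from pathlib import Path

MODEL_DIR = Path("/path/to/Qwen2.5-VL-7B-Instruct")  # placeholder: your local model directory
config_path = MODEL_DIR / "preprocessor_config.json"

config = json.loads(config_path.read_text())
config["image_processor_type"] = "Qwen2VLImageProcessor"
config_path.write_text(json.dumps(config, indent=2))
print("updated", config_path)
```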
Still not working for me. Same error: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
System Info
error: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
right now people say that they are using methods like:
I mean, this is not supposed to be like this. I can't connect to your GitHub without a VPN, and with a VPN, I can't connect to my workspace. Could the transformers team just fix the problem instead of letting people solve it by some weird method? Thanks!
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Use Xinference and download the newest vLLM so that you get transformers 4.49.0. Download Qwen2.5-VL, deploy it in vLLM, and you get the error: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
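For reference, a stripped-down script that exercises the same code path (a sketch assuming the public Qwen/Qwen2.5-VL-7B-Instruct checkpoint or a local copy of it; on an affected vLLM/transformers combination the error above is raised when the engine is constructed):

```python
# Minimal reproduction sketch: constructing the vLLM engine for Qwen2.5-VL.
# With an incompatible vLLM/transformers pair, model inspection fails with:
#   ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected.
from vllm import LLM

# The model name below assumes the public checkpoint; a local path works as well.
llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct")
print("Model loaded successfully")
```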
Expected behavior
No more error during deployment of Qwen2.5-VL 7B in vLLM.