
[Version: 4.49.0] Qwen2.5-VL is not supported in vLLM because of transformers #36292

Closed

usun1997 opened this issue Feb 20, 2025 · 16 comments

@usun1997

System Info

error: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.

[screenshot of the error]

Right now people say they are working around it with methods like:

  1. pip install --upgrade git+https://github.com/huggingface/transformers.git@336dc69d63d56f232a183a3e7f52790429b871ef ([Bug]: Qwen2.5-VL broke due to transformers upstream changes vllm-project/vllm#13285)
  2. pip install --force-reinstall git+https://github.com/huggingface/transformers.git@9985d06add07a4cc691dc54a7e34f54205c04d40 ([Bug] ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details. vllm-project/vllm#12932)
  3. There is a breaking change in transformers dev. You need to update vLLM to latest dev and also redownload the HF model repo. ([Bug] ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details. vllm-project/vllm#12932)

This is not supposed to be like this. I can't connect to your GitHub without a VPN, and with a VPN I can't connect to my workspace. Could the transformers team just fix the problem instead of leaving people to solve it with these ad-hoc methods? Thanks!

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Use Xinference and download the newest vLLM so that you get transformers 4.49.0. Then download Qwen2.5-VL, deploy it in vLLM, and you get the error:

[screenshot of the error]
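
For reference, a minimal sketch that reproduces the same failure outside Xinference, assuming vllm 0.7.2 together with transformers 4.49.0 (the model ID is the public Hub ID; substitute a local path if needed):

  # Minimal sketch reproducing the failure outside Xinference.
  # Assumes vllm 0.7.2 and transformers 4.49.0 are installed.
  from vllm import LLM

  llm = LLM(model="Qwen/Qwen2.5-VL-7B-Instruct")
  # With this version combination it raises:
  # ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be
  # inspected. Please check the logs for more details.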

Expected behavior

No more errors when deploying Qwen2.5-VL-7B in vLLM.

@usun1997 usun1997 added the bug label Feb 20, 2025
@glamourzc

me too.

@hayreenlee

me too.

@yynickyeh

me too.

@ppkliu

ppkliu commented Feb 20, 2025

me too

@Rocketknight1
Member

cc @zucchini-nlp, but it would help a lot if someone could post the logs/error to help us figure out what's going on here!

@zucchini-nlp
Member

Hey all! The issue is being fixed on the vLLM side with vllm-project/vllm#13592; afaik the team will check compatibility with the v4.49 release

cc @Isotr0py

@Isotr0py
Collaborator

That's because the Qwen2.5-VL implementation in vllm 0.7.2 (the latest release, not the latest commit) is still trying to import Qwen2_5_VLImageProcessor, which has been removed in the transformers 4.49.0 release.

vLLM team is planning to make a new release including the corresponding fix (vllm-project/vllm#13286), perhaps today or tomorrow.
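
If you want to check which side of that removal your installed transformers is on, a quick diagnostic sketch (not part of either project's documented workflow):

  # Diagnostic sketch: check whether the installed transformers still exposes the
  # class that vllm 0.7.2 tries to import.
  import transformers

  print("transformers", transformers.__version__)
  try:
      from transformers import Qwen2_5_VLImageProcessor  # removed in the 4.49.0 release
      print("Qwen2_5_VLImageProcessor found: vllm 0.7.2's import will succeed")
  except ImportError:
      print("Qwen2_5_VLImageProcessor missing: vllm 0.7.2 will fail to inspect Qwen2.5-VL")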

@ywang96

ywang96 commented Feb 20, 2025

Hello! We have released vllm==0.7.3. Although vllm-project/vllm#13602 didn't make it into the release, we have confirmed that our Docker image and PyPI package both have transformers==4.49.0 installed and should be compatible with it!
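
If you want to double-check that your environment matches that combination, a small sketch (nothing vLLM-specific, just the installed package metadata):

  # Sketch: print installed versions to compare against the combination
  # reported in this thread (vllm 0.7.3 + transformers 4.49.0).
  import importlib.metadata as metadata

  for pkg in ("vllm", "transformers"):
      print(pkg, metadata.version(pkg))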

@igorpereirabr1

Still not working after upgrading to vllm==0.7.3

[screenshot of the error]

@ywang96

ywang96 commented Feb 20, 2025

Still not working after upgrading to vllm==0.7.3

[screenshot of the error]

Can you try specifying the Hugging Face ID (e.g., Qwen/Qwen2.5-VL-3B-Instruct) directly?

@igorpereirabr1

igorpereirabr1 commented Feb 20, 2025

Still not working after upgrading to vllm==0.7.3

Can you try specifying the Hugging Face ID (e.g., Qwen/Qwen2.5-VL-3B-Instruct) directly?

Unfortunately, I can't. My GPU cluster does not have access to Hugging Face, but I have the latest version of the model stored in this path.
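
A sketch of pointing vLLM at a local snapshot while staying fully offline (the directory below is hypothetical; this still needs a vllm/transformers combination without the import mismatch discussed above):

  # Sketch: load a locally stored Qwen2.5-VL snapshot without contacting the Hub.
  # The directory below is hypothetical; point it at wherever the weights live.
  import os

  os.environ["HF_HUB_OFFLINE"] = "1"        # never call out to huggingface.co
  os.environ["TRANSFORMERS_OFFLINE"] = "1"

  from vllm import LLM

  llm = LLM(model="/path/to/Qwen2.5-VL-7B-Instruct")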

@usun1997
Author

usun1997 commented Feb 21, 2025

Updates:

I managed to solve the case using the "weird" method mentioned above:

pip install --force-reinstall git+https://github.com/huggingface/transformers.git@9985d06add07a4cc691dc54a7e34f54205c04d40

Do you know why this solves it? The transformers 4.49.0.dev version at that commit supports Qwen2.5-VL-7B here, but running "pip install transformers --upgrade" to get the released 4.49.0 does not help.

@usun1997
Author

Hey all! The issue is being fixed on the vLLM side with vllm-project/vllm#13592; afaik the team will check compatibility with the v4.49 release

cc @Isotr0py

GOOD NEWS

@gouqi666
Contributor

@usun1997, hello, it still doesn't work. Could you share your versions?
My library versions:
transformers 4.49.0.dev0
transformers-stream-generator 0.0.5
triton 3.1.0
trl 0.15.1
typeguard 4.4.1
typer 0.15.1
typing_extensions 4.12.2
tyro 0.9.13
tzdata 2025.1
uc-micro-py 1.0.3
urllib3 2.3.0
uvicorn 0.34.0
uvloop 0.21.0
virtualenv 20.29.1
vllm 0.7.3
watchfiles 1.0.4
websockets 14.2
Werkzeug 3.1.3
wheel 0.45.1
wrapt 1.17.2
xattr 1.1.4
xformers 0.0.28.post3
xgrammar 0.1.11
xxhash 3.5.0
yarl 1.18.3
zipp 3.21.0
zstandard 0.23.0

@balachandarsv

Check your config.json file, and change the processor name in preprocessor_config.json:

"image_processor_type": "Qwen2VLImageProcessor",

This should work for most cases if you have the latest transformers.
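
If you prefer to script that edit, a small sketch along these lines (the model directory is hypothetical) rewrites the field in place:

  # Sketch: rewrite image_processor_type in the model's preprocessor_config.json
  # so it names an image processor class the installed transformers actually ships.
  import json
  from pathlib import Path

  config_path = Path("/path/to/Qwen2.5-VL-7B-Instruct/preprocessor_config.json")
  config = json.loads(config_path.read_text())
  config["image_processor_type"] = "Qwen2VLImageProcessor"
  config_path.write_text(json.dumps(config, indent=2))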

@igorpereirabr1

pip install --force-reinstall git+https://github.com/huggingface/transformers.git@9985d06add07a4cc691dc54a7e34f54205c04d40

Still not working for me. Same error: ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
