
Error while deserializing header: MetadataIncompleteBuffer #35

Open
blacksunfm opened this issue Nov 17, 2024 · 1 comment

Comments

@blacksunfm

I am trying to evaluate llm4decompile-6.7b-v1.5 using the methods you provided. The model weights were downloaded from the Hugging Face repository of the same name. However, I keep encountering an error indicating that the weight files are incorrect. Below is the error message:

(llm4decompile) root@autodl-container-b52c468700-a1cda26e:~/LLM4Decompile# python ./evaluation/run_evaluation_llm4decompile_singleGPU.py
Traceback (most recent call last):
File "/root/LLM4Decompile/./evaluation/run_evaluation_llm4decompile_singleGPU.py", line 75, in <module>
model = AutoModelForCausalLM.from_pretrained(args.model_path,torch_dtype=torch.bfloat16).cuda()
File "/root/miniconda3/envs/llm4decompile/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
return model_class.from_pretrained(
File "/root/miniconda3/envs/llm4decompile/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3994, in from_pretrained
with safe_open(resolved_archive_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

Could you help me understand why this error occurs and how to fix it? Thank you!
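For context (not from the thread): a MetadataIncompleteBuffer error from safetensors usually means a weight shard was truncated or corrupted during download. Below is a minimal sketch for checking each local shard and re-downloading if any fail; the local path and repo id are assumptions, so adjust them to your setup.

# Sketch: open every *.safetensors shard; a truncated file raises the same
# header error as in the traceback above. Re-fetch the snapshot if any fail.
import glob

from safetensors import safe_open
from huggingface_hub import snapshot_download

local_dir = "./llm4decompile-6.7b-v1.5"          # hypothetical local path
repo_id = "LLM4Binary/llm4decompile-6.7b-v1.5"   # assumed repo id; check Hugging Face

broken = []
for shard in glob.glob(f"{local_dir}/*.safetensors"):
    try:
        # safe_open parses the safetensors header without loading the tensors
        with safe_open(shard, framework="pt") as f:
            _ = f.keys()
    except Exception as e:
        print(f"{shard}: {e}")
        broken.append(shard)

if broken:
    # force_download re-fetches files whose size or hash no longer matches the hub
    snapshot_download(repo_id=repo_id, local_dir=local_dir, force_download=True)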

@albertan017
Owner

Please use the vLLM script.

The other scripts have not been updated.

Regarding your error, I believe it is related to the environment rather than the model. You may need to check your transformers version and consider setting trust_remote_code=True.
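A minimal sketch of both suggestions (not from the thread), assuming a recent transformers release and the vllm package are installed; the model path is a placeholder.

# Suggestion 1: pass trust_remote_code=True when loading with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llm4decompile-6.7b-v1.5"   # hypothetical path to the local weights

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).cuda()

# Suggestion 2: load the model through vLLM instead, as the maintainer advises.
from vllm import LLM, SamplingParams

llm = LLM(model=model_path, dtype="bfloat16")
outputs = llm.generate(["# sample prompt"], SamplingParams(max_tokens=256))
print(outputs[0].outputs[0].text)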
