
MODEL REQUESTS #69

Open
robertgshaw2-neuralmagic opened this issue Aug 8, 2024 · 51 comments
Labels
enhancement New feature or request

Comments

@robertgshaw2-neuralmagic

Please comment here with any model requests for:

@robertgshaw2-neuralmagic robertgshaw2-neuralmagic added the enhancement New feature or request label Aug 8, 2024
@BlackSamorez

A gemma-2-27b-it in 8 bits for both a100 and h100 would be nice.
I tried to produce them myself but the resulting checkpoints return NaNs when loaded into vLLM.

@robertgshaw2-neuralmagic

robertgshaw2-neuralmagic commented Aug 8, 2024

A gemma-2-27b-it in 8 bits for both a100 and h100 would be nice. I tried to produce them myself but the resulting checkpoints return NaNs when loaded into vLLM.

Thanks - looking for fp8 for H100 and int8 for A100?

@BlackSamorez

A gemma-2-27b-it in 8 bits for both a100 and h100 would be nice. I tried to produce them myself but the resulting checkpoints return NaNs when loaded into vLLM.

Thanks - looking for fp8 for H100 and int8 for A100?

Exactly!

@robertgshaw2-neuralmagic

A gemma-2-27b-it in 8 bits for both a100 and h100 would be nice. I tried to produce them myself but the resulting checkpoints return NaNs when loaded into vLLM.

Thanks - looking for fp8 for H100 and int8 for A100?

Exactly!

Can you share more about the issue you were seeing?

@BlackSamorez

I'm getting empty generations and unserializable logits, indicating NaNs in the model outputs.
I used practically the same recipe as in the Llama-3.1-70b-Instruct-FP8 quant:

recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    targets: ["Linear"]
"""

@robertgshaw2-neuralmagic

I'm getting empty generations and unserializeable logits, indicating NaNs in model outputs. I used practically the same recipe as in the Llama-3.1-70b-Instruct-FP8 quant

recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    targets: ["Linear"]
"""

Could be a FlashInfer issue. I'll work on an example for you.

@edgan8

edgan8 commented Aug 8, 2024

Hi @robertgshaw2-neuralmagic , could we get an update to https://huggingface.co/neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8 ? The main model https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1 had its tokenizer updated recently and it would be great to incorporate these into the quantized model.

@Syst3m1cAn0maly

Hi!
A phi-3-vision in FP8 would be very nice (ideally with k/v scales).
Thanks in advance!

@robertgshaw2-neuralmagic

Hi @robertgshaw2-neuralmagic , could we get an update to https://huggingface.co/neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8 ? The main model https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1 had its tokenizer updated recently and it would be great to incorporate these into the quantized model.

Absolutely @Lin-K76 - could you update this when you have a chance this week?

@robertgshaw2-neuralmagic

Hi ! A phi-3-vision would be very nice in FP8 (ideally with k/v scales) Thanks in advance !

We can take a look at this. Adding support for vision models is on our roadmap, but we need to try it out a bit more.

@robertgshaw2-neuralmagic

robertgshaw2-neuralmagic commented Aug 12, 2024

I'm getting empty generations and unserializeable logits, indicating NaNs in model outputs. I used practically the same recipe as in the Llama-3.1-70b-Instruct-FP8 quant

recipe = """
quant_stage:
    quant_modifiers:
        QuantizationModifier:
            ignore: ["lm_head"]
            config_groups:
                group_0:
                    weights:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    input_activations:
                        num_bits: 8
                        type: float
                        strategy: tensor
                        dynamic: false
                        symmetric: true
                    targets: ["Linear"]
"""

@BlackSamorez - I made a couple examples with gemma2 for you (#78)

Note: gemma2 has been a bit unstable in vllm due to the soft capping on the logits. We are stabilizing this as part of the current release process.
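
For context, the soft capping in question squashes Gemma 2's attention scores and final logits with a tanh rather than letting them grow unbounded, roughly like this (the cap values below are the ones from the published Gemma 2 config):

import torch

def soft_cap(logits: torch.Tensor, cap: float) -> torch.Tensor:
    # Smoothly bounds values to (-cap, cap) instead of hard-clipping them.
    return cap * torch.tanh(logits / cap)

# Gemma 2 uses attn_logit_softcapping = 50.0 and final_logit_softcapping = 30.0,
# which is part of why the commands below pin VLLM_ATTENTION_BACKEND=FLASHINFER.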

Here are the install instructions on the vllm side:

export VLLM_VERSION=0.5.4
pip install https://vllm-wheels.s3.us-west-2.amazonaws.com/nightly/vllm-${VLLM_VERSION}-cp38-abi3-manylinux1_x86_64.whl
pip install lm_eval==0.4.3
pip install https://github.com/flashinfer-ai/flashinfer/releases/download/v0.1.2/flashinfer-0.1.2+cu121torch2.4-cp310-cp310-linux_x86_64.whl

Eval fp16:

MODEL=google/gemma-2-27b-it
VLLM_ATTENTION_BACKEND=FLASHINFER lm_eval --model vllm --model_args pretrained=$MODEL,add_bos_token=true --tasks gsm8k --num_fewshot 5 --limit 250 --batch_size "auto"
vllm (pretrained=google/gemma-2-27b-it,add_bos_token=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.864|±  |0.0217|
|     |       |strict-match    |     5|exact_match||0.848|±  |0.0228|

Eval fp8 (made with the script):

MODEL=gemma-2-27b-it-FP8-Dynamic
VLLM_ATTENTION_BACKEND=FLASHINFER lm_eval --model vllm --model_args pretrained=$MODEL,add_bos_token=true --tasks gsm8k --num_fewshot 5 --limit 250 --batch_size "auto"
vllm (pretrained=gemma-2-27b-it-FP8-Dynamic,add_bos_token=true), gen_kwargs: (None), limit: 250.0, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value|   |Stderr|
|-----|------:|----------------|-----:|-----------|---|----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match||0.856|±  |0.0222|
|     |       |strict-match    |     5|exact_match||0.852|±  |0.0225|

The strict-match score (the one that matters) is not impacted, which shows the fp8 quantization is working.

We will push a model up to the hub later this week once we have a chance to QA it.

@Lin-K76

Lin-K76 commented Aug 12, 2024

Hi @robertgshaw2-neuralmagic , could we get an update to https://huggingface.co/neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8 ? The main model https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1 had its tokenizer updated recently and it would be great to incorporate these into the quantized model.

Hi, the new model is now live at https://huggingface.co/neuralmagic/Mixtral-8x22B-Instruct-v0.1-FP8.

@robertgshaw2-neuralmagic

Thanks @Lin-K76 !

@yzlnew

yzlnew commented Aug 12, 2024

Qwen2 series in marlin24 format. I'm having trouble generating models (0.5B and 72B) with proper output; I'm getting NaN logits. Config in #54.

Oneshot with 2:4 sparse or GPTQ alone is fine, but not both. Do I need to change my calibration dataset or GPTQ config?

@robertgshaw2-neuralmagic

Qwen2 series in marlin24 format. I'm having trouble generating model (0.5B and 72B) with proper output, getting NaN logits. Config in #54.

Oneshot with 2:4 sparse or GPTQ alone is fine, but not both. Do I need to change my calibration dataset or GPTQ config?

Thanks @yzlnew, I will take a look.

My suggestion, though, would be to use W8A8 (int8 on Ampere / fp8 on Hopper) for production use cases, as this will give you the best recovery and performance right now.

We are still working on making sparsity better. I will work on a demo for you later this week though :)
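
For reference, a rough W8A8 sketch with llm-compressor looks like the following; the SmoothQuant strength here is just a common starting point, not a tuned recipe for Qwen2:

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.modifiers.smoothquant import SmoothQuantModifier

# int8 weights + int8 activations for Ampere; use an FP8 recipe on Hopper instead.
recipe = [
    SmoothQuantModifier(smoothing_strength=0.8),  # assumed strength, tune per model
    GPTQModifier(targets="Linear", scheme="W8A8", ignore=["lm_head"]),
]
# Then run oneshot(model=..., dataset=..., recipe=recipe) with a calibration set,
# as in the other scripts in this thread.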

@supa-thibaud

The Hermes 3 70B in int4 would be great!

@halexan

halexan commented Aug 22, 2024

neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 is great!

How about DeepSeek-Coder-V2-Instruct in W8A8 (INT8)? I think DeepSeek-Coder-V2-Instruct-W8A8 could be great!

Or are there any instructions to help me quantize DeepSeek-Coder-V2-Instruct to W8A8 (INT8)?

@robertgshaw2-neuralmagic

neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 is great !

How about DeepSeek-Coder-V2-Instruct in W8A8(INT8) ? I think DeepSeek-Coder-V2-Instruct-W8A8 could be great !

Or any instructions help me to quantinize DeepSeek-Coder-V2-Instruct to W8A8(INT8) ?

Hello! Currently in vllm, we only support FP8 inference for MoE models.

We are about to add support for W4A16 (PR is landing ideally today/tomorrow) and will follow up with W8A16. We currently do not have an active plan for W8A8, but can consider this on our roadmap.

@sigjhl

sigjhl commented Aug 23, 2024

Hi, can I please ask for a gemma-2-27b-int8? It's a good fit for 48GB cards and I'd love to run it with vLLM. Many quantization methods seem broken for this model unfortunately... would really appreciate it!

@fengyang95


DeepSeek-Coder-V2-Instruct in W4A16 would be great! Looking forward to your model release.

@fengyang95

fengyang95 commented Aug 24, 2024


I tried to quantize deepseek-coder-v2 to w4a16, but the following error occurred.
ValueError: Unrecognized configuration class <class 'transformers_modules.deepseek_7b.configuration_deepseek.DeepseekV2Config'> for this kind of AutoModel: SparseAutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GemmaConfig, Gemma2Config, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, JambaConfig, JetMoeConfig, LlamaConfig, MambaConfig, Mamba2Config, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, NemotronConfig, OlmoConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, Phi3Config, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.

@robertgshaw2-neuralmagic

I tried to quantize deepseek-coder-v2 to w4a16, but the following error occurred: ValueError: Unrecognized configuration class <class 'transformers_modules.deepseek_7b.configuration_deepseek.DeepseekV2Config'> for this kind of AutoModel: SparseAutoModelForCausalLM.

What is your transformers version?

Also - note that quantization support for MoEs is still under construction in vllm.

@halexan

halexan commented Aug 25, 2024

We are about to add support for W4A16 (PR is landing ideally today/tomorrow) and will follow up with W8A16. We currently do not have an active plan for W8A8, but can consider this on our roadmap.

Do you mean PR #7766 for W4A16? @robertgshaw2-neuralmagic

@fengyang95


What is your transformers version?

Also - note that quantization support for MoEs is still under construction in vllm.

I see, I forgot to set trust_remote_code=True.
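
For anyone hitting the same ValueError: the DeepSeek-V2 model classes live in custom code on the Hub, so the load call needs trust_remote_code, roughly:

from llmcompressor.transformers import SparseAutoModelForCausalLM

model = SparseAutoModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-Coder-V2-Instruct",  # or a local path
    trust_remote_code=True,
    torch_dtype="auto",
    device_map="auto",
)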

@robertgshaw2-neuralmagic

#7766

yes

@halexan

halexan commented Aug 26, 2024

#7766

yes

So, what's the difference between #7415 and #7766?

@fengyang95


I tried to quantize deepseek-v2 to w4a16 (using A100 80G * 8, 1800G memory), but it suddenly gets killed when it reaches "INFO - Preparing model.layers.58 for compression".

@zxy1119

zxy1119 commented Aug 26, 2024

I tried to quantize llama2-7b to w8a8, but it's too slow. I want to know the reason.

@robertgshaw2-neuralmagic

I tried to quantize llama2-7b to w8a8, but it's too slow. I want to know the reason.

Are you running on a CPU?
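
If you are not sure where the weights ended up, a quick check looks something like this (model ID illustrative):

import torch
from llmcompressor.transformers import SparseAutoModelForCausalLM

model = SparseAutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative
    device_map="auto" if torch.cuda.is_available() else None,
    torch_dtype="auto",
)
# If this prints only "cpu", calibration will run on the CPU and be very slow.
print({str(p.device) for p in model.parameters()})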

@robertgshaw2-neuralmagic

I tried to quantize deepseek-v2 to w4a16 (using A100 80G * 8, 1800G memory), but it suddenly gets killed when it reaches "INFO - Preparing model.layers.58 for compression".

This usually means you’re running out of CPU memory. This is a big model … how much CPU RAM and GPU RAM do you have?
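
A quick way to check the available headroom before launching the run (psutil is already used by the scripts in this thread):

import psutil
import torch

print(f"free CPU RAM: {psutil.virtual_memory().available / 1024**3:.0f} GiB")
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"cuda:{i} free {free / 1024**3:.0f} GiB of {total / 1024**3:.0f} GiB")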

@fengyang95

This usually means you’re running out of CPU memory. This is a big model … how much CPU RAM and GPU RAM do you have?

About 2T of CPU memory and 640G of GPU memory.
Could you please tell me how to properly set the device_map parameter?

@dsikka

dsikka commented Aug 26, 2024

#7766

yes

So, What's the difference between #7415 and #7766?

#7766 introduces a Marlin kernel to support W4A16 MoE models.
#7415 used Triton kernels and targeted int8 models; it also introduced a quant config, experts_int8, to do so.
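
As I understand it, experts_int8 quantizes the expert weights on the fly at load time, so it can be pointed directly at an unquantized checkpoint; a rough usage sketch (model name illustrative):

from vllm import LLM

llm = LLM(
    model="Qwen/Qwen2-57B-A14B-Instruct",  # illustrative MoE checkpoint
    quantization="experts_int8",
    tensor_parallel_size=8,
)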

@fengyang95


I tried quantizing deepseek-coder-v2-instruct using 8 A100 80G GPUs. To avoid OOM, I set memory_limits to 35G.

When it reached the 32nd layer during quantization, the speed suddenly slowed down. I suspect that this portion of the parameters was loaded to the CPU, causing the slowdown. But why is it even slower than loading everything to the CPU?


@robertgshaw2-neuralmagic

I tried quantizing deepseek-coder-v2-instruct using 8 A100 80G GPUs. To avoid OOM, I set memory_limits to 35G.

When it reached the 32nd layer during quantization, the speed suddenly slowed down. I suspect that this portion of the parameters was loaded to the CPU, causing the slowdown. But why is it even slower than loading everything to the CPU?

Can you try this example here with sequential_update:

You'll need to install from source for this

@fengyang95

fengyang95 commented Aug 29, 2024


Can you try this example here with sequential_update:

You'll need to install from source for this

Yes, I used sequential_update=True. If it is not set, the run uses more GPU memory and causes OOM. Here is my code:

from llmcompressor.transformers import SparseAutoModelForCausalLM
from transformers import AutoTokenizer
import argparse
from typing import Dict, Union

import psutil
import torch
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModelForCausalLM
import flash_attn

print(flash_attn.__version__)


def custom_offload_device_map(
        model_stub: str,
        max_memory_per_gpu: Union[str, int],
        max_memory_gpu0: Union[str, int],
        num_gpus: int = 1,
        offload_buffers:bool=False,
        **model_kwargs,
) -> Dict[Union[int, str], Union[int, str]]:
    """
    Calculates the optimal gpu mappings for model_stub stored as torch_dtype, where
    each GPU is restricted to allocating a specific amount of memory.

    :param model_stub: local path or HF stub to calculate mapping for
    :param max_memory_per_gpu: Max memory to allocate on each GPU, as either a string
        such as "10GB" or an integer number of bytes
    :param num_gpus: number of gpus to utilize
    :param model_kwargs: keyword arguments to pass to model initializer
    :return: memory mapping for layers of model_stub to be passed to from_pretrained()
    """
    max_cpu_memory = psutil.virtual_memory().available
    memory_limits = {device: max_memory_per_gpu for device in range(1, num_gpus)}
    memory_limits[0] = max_memory_gpu0
    memory_limits["cpu"] = max_cpu_memory

    with init_empty_weights():
        dummy_model = AutoModelForCausalLM.from_pretrained(model_stub, **model_kwargs)
        device_map = infer_auto_device_map(
            dummy_model,
            max_memory=memory_limits,
            no_split_module_classes=dummy_model._no_split_modules,
            offload_buffers=offload_buffers
        )
        del dummy_model

    return device_map


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model-id", type=str, default=None)
    parser.add_argument("--dataset-dir", type=str, default=None)
    parser.add_argument("--save-dir", type=str, default=None)
    # parser.add_argument("", type=str, default="auto")
    parser.add_argument("--max-memory-per-gpu", type=str, default="35GB")
    parser.add_argument("--max-memory-gpu0", type=str, default="35GB")
    parser.add_argument("--device-map", type=str, default='auto')
    parser.add_argument("--num-samples", type=int, default=512)
    parser.add_argument("--offload-buffers",type=bool,default=False)
    args = parser.parse_args()

    from datasets import load_dataset

    NUM_CALIBRATION_SAMPLES = args.num_samples
    MAX_SEQUENCE_LENGTH = 2048

    # Load dataset.
    ds = load_dataset(args.dataset_dir, split="train_sft")
    ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

    tokenizer = AutoTokenizer.from_pretrained(args.model_id, trust_remote_code=True)


    # Preprocess the data into the format the model is trained with.
    def preprocess(example):
        return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False, )}


    ds = ds.map(preprocess)


    # Tokenize the data (be careful with bos tokens - we need add_special_tokens=False since the chat_template already added it).
    def tokenize(sample):
        return tokenizer(sample["text"], padding=False, max_length=MAX_SEQUENCE_LENGTH, truncation=True,
                         add_special_tokens=False)


    ds = ds.map(tokenize, remove_columns=ds.column_names)

    from llmcompressor.transformers import oneshot
    from llmcompressor.modifiers.quantization import GPTQModifier

    # Configure the quantization algorithm to run.
    recipe = GPTQModifier(targets="Linear", scheme="W4A16", ignore=["lm_head"], sequential_update=True)
    num_gpus = 8

    if args.device_map == "cpu":
        device_map = "cpu"
    else:
        device_map = custom_offload_device_map(
            args.model_id, max_memory_per_gpu=args.max_memory_per_gpu, max_memory_gpu0=args.max_memory_gpu0,
            num_gpus=num_gpus, trust_remote_code=True, torch_dtype=torch.bfloat16,offload_buffers=args.offload_buffers

        )

    model = SparseAutoModelForCausalLM.from_pretrained(
        args.model_id, trust_remote_code=True, device_map=device_map, torch_dtype=torch.bfloat16,
    )
    # Apply quantization.
    oneshot(
        model=model, dataset=ds,
        recipe=recipe,
        max_seq_length=MAX_SEQUENCE_LENGTH,
        num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    )

    # Save to disk compressed.
    model.save_pretrained(args.save_dir, save_compressed=True)
    tokenizer.save_pretrained(args.save_dir)

@fengyang95


How should I load a w4a16 version of deepseek-v2 in vllm that was compressed using llm-compressor? I used quantization=compressed-tensors, but it throws an error:

File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 192, in __init__
assert self.quant_method is not None

@robertgshaw2-neuralmagic

How should I load a w4a16 version of deepseek-v2 in vllm that was compressed using llm-compressor? I used quantization=compressed-tensors, but it throws an error:

File "/usr/local/lib/python3.9/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 192, in __init__ assert self.quant_method is not None

Release v0.5.6 will support it. Need this PR: vllm-project/vllm#7766
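
Once that lands, loading the llm-compressor output should look roughly like this (paths and flags here are assumptions based on the rest of this thread):

from vllm import LLM, SamplingParams

llm = LLM(
    model="/path/to/DeepSeek-Coder-V2-Instruct-W4A16",
    quantization="compressed-tensors",
    trust_remote_code=True,
    tensor_parallel_size=8,
    max_model_len=8192,
)
outputs = llm.generate(["def quicksort(arr):"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)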

@fengyang95

Release v0.5.6 will support it. Need this PR: vllm-project/vllm#7766

Is this PR still in progress? Do you have an estimated timeline?

@fengyang95

fengyang95 commented Sep 10, 2024

@robertgshaw2-neuralmagic I used this framework with 512 calibration samples to quantize the deepseek-v2.5 model, but the output is just "!!". Are there any tricks for quantizing this model?
Here is my script:

from llmcompressor.transformers import SparseAutoModelForCausalLM
from transformers import AutoTokenizer
import argparse
from typing import Dict, Union
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import psutil
import torch
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModelForCausalLM
import flash_attn
from datasets import load_dataset

print(flash_attn.__version__)


def custom_offload_device_map(
        model_stub: str,
        max_memory_per_gpu: Union[str, int],
        max_memory_gpu0: Union[str, int],
        num_gpus: int = 1,
        offload_buffers: bool = False,
        **model_kwargs,
) -> Dict[Union[int, str], Union[int, str]]:
    """
    Calculates the optimal gpu mappings for model_stub stored as torch_dtype, where
    each GPU is restricted to allocating a specific amount of memory.

    :param model_stub: local path or HF stub to calculate mapping for
    :param max_memory_per_gpu: Max memory to allocate on each GPU, as either a string
        such as "10GB" or an integer number of bytes
    :param num_gpus: number of gpus to utilize
    :param model_kwargs: keyword arguments to pass to model initializer
    :return: memory mapping for layers of model_stub to be passed to from_pretrained()
    """
    max_cpu_memory = psutil.virtual_memory().available
    memory_limits = {device: max_memory_per_gpu for device in range(1, num_gpus)}
    memory_limits[0] = max_memory_gpu0
    memory_limits["cpu"] = max_cpu_memory

    with init_empty_weights():
        dummy_model = AutoModelForCausalLM.from_pretrained(model_stub, **model_kwargs)
        device_map = infer_auto_device_map(
            dummy_model,
            max_memory=memory_limits,
            no_split_module_classes=dummy_model._no_split_modules,
            offload_buffers=offload_buffers
        )
        del dummy_model

    return device_map


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model-id", type=str, default="/opt/tiger/deepseek_http/models--deepseek-ai--DeepSeek-V2.5")
    parser.add_argument("--dataset-dir", type=str,
                        default="/opt/tiger/deepseek_http/datasets--HuggingFaceH4--ultrachat_200k")
    parser.add_argument("--max-memory-per-gpu", type=str, default="52GB")
    parser.add_argument("--max-memory-gpu0", type=str, default="52GB")
    parser.add_argument("--device-map", type=str, default='auto')
    parser.add_argument("--num-samples", type=int, default=512)
    parser.add_argument("--offload-buffers",  action='store_true')
    parser.add_argument("--max-model-len", type=int, default=8192)
    parser.add_argument("--sequential-update", action='store_true')
    parser.add_argument("--dataset-split", type=str, default='train_sft')
    args = parser.parse_args()

    # Select calibration dataset.
    DATASET_ID = args.dataset_dir
    DATASET_SPLIT = args.dataset_split

    MAX_SEQUENCE_LENGTH = args.max_model_len
    NUM_CALIBRATION_SAMPLES = args.num_samples
    # Load dataset and preprocess.
    ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
    ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

    tokenizer = AutoTokenizer.from_pretrained(args.model_id)


    def preprocess(example):
        if 'messages' in example:
            messages = example['messages']
        elif 'input' in example and 'output' in example:
            messages = [
                {
                    "role": "user",
                    "content": example['input']
                },
                {
                    "role": "assistant",
                    "content": example['output']
                }
            ]
        else:
            raise ValueError("in valid example")
        return {
            "text": tokenizer.apply_chat_template(
                messages,
                tokenize=False,
            )
        }


    ds = ds.map(preprocess)


    # Tokenize inputs.
    def tokenize(sample):
        return tokenizer(
            sample["text"],
            padding=False,
            max_length=MAX_SEQUENCE_LENGTH,
            truncation=True,
            add_special_tokens=False,
        )


    ds = ds.map(tokenize, remove_columns=ds.column_names)

    # define a llmcompressor recipe for W4A16 quantization
    recipe = GPTQModifier(
        targets="Linear", scheme="W4A16", ignore=["lm_head"], sequential_update=args.sequential_update
    )

    if args.device_map == "cpu":
        model = SparseAutoModelForCausalLM.from_pretrained(
            args.model_id, device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True
        )
    else:
        device_map = custom_offload_device_map(
            model_stub=args.model_id,
            max_memory_per_gpu=args.max_memory_per_gpu,
            max_memory_gpu0=args.max_memory_gpu0,
            num_gpus=8,
            offload_buffers=args.offload_buffers,
            trust_remote_code=True
        )
        model = SparseAutoModelForCausalLM.from_pretrained(
            args.model_id, device_map=device_map, torch_dtype=torch.bfloat16, trust_remote_code=True
        )

    SAVE_DIR = args.model_id + '-W4A16'

    oneshot(
        model=model, dataset=ds,
        recipe=recipe,
        max_seq_length=MAX_SEQUENCE_LENGTH,
        num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    )

    # Save to disk compressed.
    model.save_pretrained(SAVE_DIR, save_compressed=True,
                          skip_compression_stats=True)
    tokenizer.save_pretrained(SAVE_DIR)

@robertgshaw2-neuralmagic

@robertgshaw2-neuralmagic I use this framework with 512 data points to calibrate the quantized deepseek-v2.5 model. The output result is "!!". Are there any tricks for quantizing this model? Here is my script:

from llmcompressor.transformers import SparseAutoModelForCausalLM
from transformers import AutoTokenizer
import argparse
from typing import Dict, Union
from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier
import psutil
import torch
from accelerate import infer_auto_device_map, init_empty_weights
from transformers import AutoModelForCausalLM
import flash_attn
from datasets import load_dataset

print(flash_attn.__version__)


def custom_offload_device_map(
        model_stub: str,
        max_memory_per_gpu: Union[str, int],
        max_memory_gpu0: Union[str, int],
        num_gpus: int = 1,
        offload_buffers: bool = False,
        **model_kwargs,
) -> Dict[Union[int, str], Union[int, str]]:
    """
    Calculates the optimal gpu mappings for model_stub stored as torch_dtype, where
    each GPU is restricted to allocating a specific amount of memory.

    :param model_stub: local path or HF stub to calculate mapping for
    :param max_memory_per_gpu: Max memory to allocate on each GPU other than GPU 0,
        as either a string such as "10GB" or an integer number of bytes
    :param max_memory_gpu0: Max memory to allocate on GPU 0, in the same format
    :param num_gpus: number of gpus to utilize
    :param offload_buffers: whether to offload buffers when inferring the device map
    :param model_kwargs: keyword arguments to pass to model initializer
    :return: memory mapping for layers of model_stub to be passed to from_pretrained()
    """
    max_cpu_memory = psutil.virtual_memory().available
    memory_limits = {device: max_memory_per_gpu for device in range(1, num_gpus)}
    memory_limits[0] = max_memory_gpu0
    memory_limits["cpu"] = max_cpu_memory

    with init_empty_weights():
        dummy_model = AutoModelForCausalLM.from_pretrained(model_stub, **model_kwargs)
        device_map = infer_auto_device_map(
            dummy_model,
            max_memory=memory_limits,
            no_split_module_classes=dummy_model._no_split_modules,
            offload_buffers=offload_buffers
        )
        del dummy_model

    return device_map


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model-id", type=str, default="/opt/tiger/deepseek_http/models--deepseek-ai--DeepSeek-V2.5")
    parser.add_argument("--dataset-dir", type=str,
                        default="/opt/tiger/deepseek_http/datasets--HuggingFaceH4--ultrachat_200k")
    parser.add_argument("--max-memory-per-gpu", type=str, default="52GB")
    parser.add_argument("--max-memory-gpu0", type=str, default="52GB")
    parser.add_argument("--device-map", type=str, default='auto')
    parser.add_argument("--num-samples", type=int, default=512)
    parser.add_argument("--offload-buffers",  action='store_true')
    parser.add_argument("--max-model-len", type=int, default=8192)
    parser.add_argument("--sequential-update", action='store_true')
    parser.add_argument("--dataset-split", type=str, default='train_sft')
    args = parser.parse_args()

    # Select calibration dataset.
    DATASET_ID = args.dataset_dir
    DATASET_SPLIT = args.dataset_split

    MAX_SEQUENCE_LENGTH = args.max_model_len
    NUM_CALIBRATION_SAMPLES = args.num_samples
    # Load dataset and preprocess.
    ds = load_dataset(DATASET_ID, split=DATASET_SPLIT)
    ds = ds.shuffle(seed=42).select(range(NUM_CALIBRATION_SAMPLES))

    tokenizer = AutoTokenizer.from_pretrained(args.model_id)


    def preprocess(example):
        if 'messages' in example:
            messages = example['messages']
        elif 'input' in example and 'output' in example:
            messages = [
                {
                    "role": "user",
                    "content": example['input']
                },
                {
                    "role": "assistant",
                    "content": example['output']
                }
            ]
        else:
            raise ValueError("in valid example")
        return {
            "text": tokenizer.apply_chat_template(
                messages,
                tokenize=False,
            )
        }


    ds = ds.map(preprocess)


    # Tokenize inputs.
    def tokenize(sample):
        return tokenizer(
            sample["text"],
            padding=False,
            max_length=MAX_SEQUENCE_LENGTH,
            truncation=True,
            add_special_tokens=False,
        )


    ds = ds.map(tokenize, remove_columns=ds.column_names)

    # define an llmcompressor recipe for W4A16 weight-only (GPTQ) quantization
    recipe = GPTQModifier(
        targets="Linear", scheme="W4A16", ignore=["lm_head"], sequential_update=args.sequential_update
    )

    if args.device_map == "cpu":
        model = SparseAutoModelForCausalLM.from_pretrained(
            args.model_id, device_map="cpu", torch_dtype=torch.bfloat16, trust_remote_code=True
        )
    else:
        device_map = custom_offload_device_map(
            model_stub=args.model_id,
            max_memory_per_gpu=args.max_memory_per_gpu,
            max_memory_gpu0=args.max_memory_gpu0,
            num_gpus=8,
            offload_buffers=args.offload_buffers,
            trust_remote_code=True
        )
        model = SparseAutoModelForCausalLM.from_pretrained(
            args.model_id, device_map=device_map, torch_dtype=torch.bfloat16, trust_remote_code=True
        )

    SAVE_DIR = args.model_id + '-W4A16'

    oneshot(
        model=model, dataset=ds,
        recipe=recipe,
        max_seq_length=MAX_SEQUENCE_LENGTH,
        num_calibration_samples=NUM_CALIBRATION_SAMPLES,
    )

    # Save to disk compressed.
    model.save_pretrained(SAVE_DIR, save_compressed=True,
                          skip_compression_stats=True)
    tokenizer.save_pretrained(SAVE_DIR)

Thanks @fengyang95 - @dsikka is looking into this

@dsikka
Copy link
Collaborator

dsikka commented Sep 10, 2024

Hey @fengyang95 - investigating this issue. Will update once fixed.
Thanks!

@dsikka
Copy link
Collaborator

dsikka commented Sep 12, 2024

Hi @fengyang95 - can you share the code you're using which generates !!!?

We have also added this example which you can follow:
https://github.com/vllm-project/llm-compressor/blob/main/examples/quantizing_moe/deepseek_moe_w8a8.py
You can swap in the larger model and change the scheme to W4A16.

You'll need to use the latest main to pull in a fix that was needed for deepseek_v2
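
For reference, the key differences from the W8A8 script quoted above are the recipe scheme and keeping the MoE router gates unquantized. A rough sketch of that change (not the exact contents of the linked example; the `re:.*mlp.gate$` ignore pattern is an assumption about DeepSeek's module names, and `model`/`ds` refer to the model and calibration dataset prepared as in the script above):

from llmcompressor.modifiers.quantization import GPTQModifier
from llmcompressor.transformers import oneshot

# W4A16 weight-only GPTQ recipe; skip lm_head and the MoE router gates.
# The gate pattern is an assumption and may need adjusting for DeepSeek-V2.5.
recipe = GPTQModifier(
    targets="Linear",
    scheme="W4A16",
    ignore=["lm_head", "re:.*mlp.gate$"],
)

oneshot(
    model=model,                  # SparseAutoModelForCausalLM loaded as in the script above
    dataset=ds,                   # tokenized calibration samples from the script above
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)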

@fengyang95
Copy link

pull in a fix that was needed for deepseek_v

python3 -m vllm.entrypoints.openai.api_server --model DeepSeek-V2.5-W4A16 --served-model-name dsv2 --trust-remote-code --tensor-parallel-size 8 --max-model-len 16384 --port $PORT0 --gpu-memory-utilization 0.9 --quantization compressed-tensors --enforce-eager

@fengyang95
Copy link

python3 -m vllm.entrypoints.openai.api_server --model DeepSeek-V2.5-W4A16 --served-model-name dsv2 --trust-remote-code --tensor-parallel-size 8 --max-model-len 16384 --port $PORT0 --gpu-memory-utilization 0.9 --quantization compressed-tensors --enforce-eager

Hi @fengyang95 - can you share the code you're using which generates !!!?

We have also added this example which you can follow: https://github.com/vllm-project/llm-compressor/blob/main/examples/quantizing_moe/deepseek_moe_w8a8.py You can swap in the larger model and change the scheme to W4A16.

You'll need to use the latest main to pull in a fix that was needed for deepseek_v2

Thank you, I'll try it right away.

@fengyang95
Copy link

fengyang95 commented Sep 13, 2024

Hi @fengyang95 - can you share the code you're using which generates !!!?

We have also added this example which you can follow: https://github.com/vllm-project/llm-compressor/blob/main/examples/quantizing_moe/deepseek_moe_w8a8.py You can swap in the larger model and change the scheme to W4A16.

You'll need to use the latest main to pull in a fix that was needed for deepseek_v2

Hi @dsikka, I followed your suggestion to ignore the gate parameter and updated the code. However, the quantized model still outputs "!!!". Have you tested this on DeepSeek-V2.5?

@dsikka
Copy link
Collaborator

dsikka commented Sep 13, 2024

Hi @fengyang95 there was a bug in vLLM which has now been fixed on main. Do you mind trying it again?
We have also added a W4A16 end-to-end example: https://github.com/vllm-project/llm-compressor/blob/main/examples/quantizing_moe/deepseek_moe_w4a16.py
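
Once the checkpoint is requantized, a quick sanity check with vLLM's offline API can confirm the output is no longer "!!!". A rough sketch (model path, prompt, and parallelism settings are illustrative; dtype is pinned to float16 because the fused MoE marlin kernel asserts float16 activations):

from vllm import LLM, SamplingParams

llm = LLM(
    model="DeepSeek-V2.5-W4A16",   # path to the quantized checkpoint (illustrative)
    trust_remote_code=True,
    tensor_parallel_size=8,
    dtype="float16",               # the W4A16 fused-MoE kernel expects fp16 activations
    max_model_len=8192,
)

outputs = llm.generate(
    ["Write a one-sentence summary of what weight-only quantization does."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)  # should be coherent text, not "!!!"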

@fengyang95
Copy link

Hi @fengyang95 there was a bug in vLLM which has now been fixed on main. Do you mind trying it again? We have also added a W4A16 end-to-end example: https://github.com/vllm-project/llm-compressor/blob/main/examples/quantizing_moe/deepseek_moe_w4a16.py

I'll try it asap

@TheAhmadOsman
Copy link

@dsikka

I am getting the following error while trying to run https://huggingface.co/nm-testing/DeepSeek-V2.5-W4A16

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/vllm/vllm/worker/model_runner_base.py", line 112, in _wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/model_runner.py", line 1547, in execute_model
    hidden_or_intermediate_states = model_executable(
                                    ^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 504, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 461, in forward
    hidden_states, residual = layer(positions, hidden_states,
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 401, in forward
    hidden_states = self.mlp(hidden_states)
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 148, in forward
    final_hidden_states = self.experts(
                          ^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/fused_moe/layer.py", line 469, in forward
    final_hidden_states = self.quant_method.apply(
                          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py", line 285, in apply
    return fused_marlin_moe(
           ^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/fused_moe/fused_marlin_moe.py", line 150, in fused_marlin_moe
    assert hidden_states.dtype == torch.float16
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/vllm/vllm/entrypoints/openai/rpc/server.py", line 242, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/engine/async_llm_engine.py", line 576, in from_engine_args
    engine = cls(
             ^^^^
  File "/vllm/vllm/engine/async_llm_engine.py", line 471, in __init__
    self.engine = self._engine_class(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/engine/async_llm_engine.py", line 260, in __init__
    super().__init__(*args, **kwargs)
  File "/vllm/vllm/engine/llm_engine.py", line 331, in __init__
    self._initialize_kv_caches()
  File "/vllm/vllm/engine/llm_engine.py", line 465, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/executor/distributed_gpu_executor.py", line 39, in determine_num_available_blocks
    num_blocks = self._run_workers("determine_num_available_blocks", )
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/executor/multiproc_gpu_executor.py", line 185, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/worker.py", line 223, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/vllm/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/model_runner.py", line 1219, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/vllm/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/model_runner_base.py", line 126, in _wrapper
    raise type(err)(
AssertionError: Error in model execution (input dumped to /tmp/err_execute_model_input_20240917-022954.pkl): 
ERROR 09-17 02:30:01 api_server.py:203] RPCServer process died before responding to readiness probe
/usr/lib/python3.11/multiprocessing/resource_tracker.py:254: UserWarning: resource_tracker: There appear to be 1 leaked shared_memory objects to clean up at shutdown                                                                                        
  warnings.warn('resource_tracker: There appear to be %d '

The command I ran:
vllm serve nm-testing/DeepSeek-V2.5-W4A16 --tensor-parallel-size 8 --gpu-memory-utilization 0.96 --max-model-len 131072 --dtype auto --quantization compressed-tensors --trust-remote-code

@dsikka
Copy link
Collaborator

dsikka commented Sep 17, 2024

Hi @TheAhmadOsman - the current kernel only supports float16. Could you pass that in for dtype?

@TheAhmadOsman
Copy link

@dsikka running vllm serve nm-testing/DeepSeek-V2.5-W4A16 --tensor-parallel-size 8 --gpu-memory-utilization 0.96 --max-model-len 131072 --dtype float16 --quantization compressed-tensors --trust-remote-code I get the following error:

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/vllm/vllm/worker/model_runner_base.py", line 116, in _wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/model_runner.py", line 1590, in execute_model
    hidden_or_intermediate_states = model_executable(
                                    ^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 504, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 461, in forward
    hidden_states, residual = layer(positions, hidden_states,
                              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 401, in forward
    hidden_states = self.mlp(hidden_states)
                    ^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/models/deepseek_v2.py", line 148, in forward
    final_hidden_states = self.experts(
                          ^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/fused_moe/layer.py", line 469, in forward
    final_hidden_states = self.quant_method.apply(
                          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/quantization/compressed_tensors/compressed_tensors_moe.py", line 285, in apply
    return fused_marlin_moe(
           ^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/fused_moe/fused_marlin_moe.py", line 171, in fused_marlin_moe
    sorted_token_ids, _, _ = moe_align_block_size(topk_ids, block_size_m, E)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/model_executor/layers/fused_moe/fused_moe.py", line 228, in moe_align_block_size
    ops.moe_align_block_size(topk_ids, num_experts, block_size, sorted_ids,
  File "/vllm/vllm/_custom_ops.py", line 32, in wrapper
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/_custom_ops.py", line 800, in moe_align_block_size
    torch.ops._C.moe_align_block_size(topk_ids, num_experts, block_size,
  File "/vllm/venv/lib/python3.11/site-packages/torch/_ops.py", line 1061, in __call__
    return self_._op(*args, **(kwargs or {}))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/vllm/vllm/entrypoints/openai/rpc/server.py", line 242, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/engine/async_llm_engine.py", line 576, in from_engine_args
    engine = cls(
             ^^^^
  File "/vllm/vllm/engine/async_llm_engine.py", line 471, in __init__
    self.engine = self._engine_class(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/engine/async_llm_engine.py", line 260, in __init__
    super().__init__(*args, **kwargs)
  File "/vllm/vllm/engine/llm_engine.py", line 331, in __init__
    self._initialize_kv_caches()
  File "/vllm/vllm/engine/llm_engine.py", line 465, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/executor/distributed_gpu_executor.py", line 39, in determine_num_available_blocks
    num_blocks = self._run_workers("determine_num_available_blocks", )
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/executor/multiproc_gpu_executor.py", line 185, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/worker.py", line 223, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/vllm/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/model_runner.py", line 1236, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/vllm/venv/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/vllm/worker/model_runner_base.py", line 144, in _wrapper
    raise type(err)(
RuntimeError: Error in model execution (input dumped to /tmp/err_execute_model_input_20240917-230234.pkl): CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

@TheAhmadOsman
Copy link

@dsikka I just noticed that the config.json file on the model's page has "torch_dtype": "bfloat16", unlike other models you uploaded, which have it as float16. Also, I believe the end-to-end example you posted earlier loads the model in bfloat16 as well.
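
For anyone comparing checkpoints, one way to confirm what dtype a checkpoint's config declares (and then override it at load time) is a small check like the following; the model id matches the checkpoint discussed above:

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("nm-testing/DeepSeek-V2.5-W4A16", trust_remote_code=True)
print(cfg.torch_dtype)  # reports bfloat16 for this checkpoint, per its config.json

# At serve time, --dtype float16 overrides the config.json value, e.g.:
#   vllm serve nm-testing/DeepSeek-V2.5-W4A16 --dtype float16 --trust-remote-code ...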
