@njhill I saw you cleaned up this code recently. Did you happen to check the case with chunked prefill too? It looked like it was broken a couple of weeks ago.
@tdoublep I expect the recent changes I made fixed this issue; they included skipping streaming responses for intermediate prompt chunks. I will verify that when I get a chance!
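For context, a rough sketch of that idea follows. This is not vLLM's actual code; the names (stream_chunks, engine_outputs, new_token_ids) are invented for illustration. The point is that engine steps which only consume intermediate prompt chunks should neither emit a streamed delta nor add to completion_tokens:

```python
# Illustrative sketch only -- not vLLM's implementation. Engine steps that merely
# process an intermediate prompt (prefill) chunk generate no tokens, so they are
# skipped instead of being streamed with a bogus completion_tokens count.
def stream_chunks(engine_outputs):
    completion_tokens = 0
    for output in engine_outputs:           # one dict per engine step (assumed shape)
        new_token_ids = output.get("new_token_ids", [])
        if not new_token_ids:                # intermediate prompt chunk: nothing generated yet
            continue
        completion_tokens += len(new_token_ids)
        yield {
            "text": output.get("new_text", ""),
            "usage": {"completion_tokens": completion_tokens},
        }
```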
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
Start the inference server with:
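The exact command from the original report is not included in this copy; a minimal setup that exercises the same path might look like the following, where the model name, port, and chunk size are placeholders rather than values from the report:

```python
# Hypothetical reproduction setup (model, port, and chunk size are illustrative).
# Launches vLLM's OpenAI-compatible server with chunked prefill enabled and a
# small batched-token budget so that a long prompt is prefilled in several chunks.
import subprocess

server = subprocess.Popen([
    "vllm", "serve", "meta-llama/Llama-3.1-8B-Instruct",
    "--enable-chunked-prefill",
    "--max-num-batched-tokens", "512",
    "--port", "8000",
])
```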
Then send a request with a long prompt for a single output token and enable streaming usage stats:
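The original request is likewise not captured here; a sketch of an equivalent streaming request, assuming the server above is running (the continuous_usage_stats stream option is a vLLM-specific extension, included here as an assumption):

```python
# Hypothetical client request: long prompt, a single output token, streaming on,
# and usage stats attached to the streamed chunks.
import json
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "prompt": "word " * 2000,   # long prompt so prefill spans several chunks
        "max_tokens": 1,            # single output token
        "stream": True,
        "stream_options": {"include_usage": True, "continuous_usage_stats": True},
    },
    stream=True,
)

for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":
        break
    print(json.loads(payload).get("usage"))
```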
This produces streamed responses whose token counting is totally wrong. The first response should have completion_tokens=0, which is how it behaved in previous versions of vLLM. It works fine without chunked prefill.

Might be related to #8625, but it looks slightly different? I will debug it now and take a look at that one too.
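To make the expectation concrete, a check along these lines could be appended to the snippet above; it is hypothetical and based only on the behaviour described in this report:

```python
# Hypothetical check of the expected behaviour: chunks streamed while the prompt
# is still being prefilled should report completion_tokens == 0, and the count
# should never exceed max_tokens (1 in this reproduction).
def check_usage(usages, max_tokens=1):
    usages = [u for u in usages if u is not None]
    assert usages, "no usage entries were streamed"
    assert usages[0]["completion_tokens"] == 0, usages[0]
    assert all(u["completion_tokens"] <= max_tokens for u in usages), usages
```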
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.