[RFC]: Reimplement and separate beam search on top of vLLM core #8306
Comments
This is a good goal to work toward, as ensuring that API interfaces (OpenAI, beam search, or otherwise) can efficiently and reliably schedule new sequences benefits all consumers.
The flood of vLLM notifications is hard to keep up with, so I may be out of date. My understanding was that prefix caching is block based rather than precise, resulting in some amount of excess computation. Is there an issue to allow APIs to specify the "prefix length" that should be cached? This new approach could see performance degrade when the sequence length approaches a multiple of the KV block length: if each arm of the beam search schedules a new sequence, it must prefill O(kv_block_size) tokens in addition to decoding O(1) tokens. Ideally both would be O(1), with a hint allowing beam search to cache the entire prefix.
We can set the block size to 1 for the vLLM instance when we use beam search; then we don't have to waste any computation.
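For context, here is a minimal configuration sketch for the prefix-reuse setup discussed above. `enable_prefix_caching` and `block_size` are existing engine arguments, but the model name is a placeholder, and whether the engine actually accepts a block size as small as 1 (as suggested above) would need to be verified against current argument validation.

```python
# Sketch: an engine configured so beam-search steps can reuse the prompt's KV cache.
# The suggestion above is to shrink the block size (ideally to 1) so the cached
# prefix aligns with the exact sequence length; whether a value of 1 is accepted
# by the current engine-argument validation still needs to be checked.
from vllm import LLM

llm = LLM(
    model="facebook/opt-125m",     # placeholder model
    enable_prefix_caching=True,    # reuse the KV cache of shared prefixes
    block_size=16,                 # KV-cache block granularity (tokens)
)
```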
There are also alternative implementations of this, e.g. moving the functionality into a special class of Worker or Executor, which can be configured when beam search is turned on for any engine that needs it.
@youkaichao How well does the KV cache handle a block size of 1, in terms of compute or memory overhead?
@AaronFriel I don't think setting the block size to 1 will affect performance a lot, but we need to test and measure the impact.
@simon-mo Can you explain more? What special functions / interfaces would these new Worker or Executor classes need?
Motivation.
A rework of #6226
After discussing further with the community and many contributors, we find the following:
Because beam search is a search algorithm, it conflicts with all of the other sampling algorithms. As a result, many features in vLLM already assert that beam search is not used, e.g.:
vllm/vllm/spec_decode/batch_expansion.py, lines 303 to 305 (at 6e36f4f)
vllm/vllm/engine/output_processor/multi_step.py, lines 100 to 103 (at 6e36f4f)
Keeping beam search as-is in the codebase will not benefit current beam-search users, as no optimization work will target better beam-search performance. What's worse, very few developers understand beam search, so keeping it as-is will not only lead to more beam-search bugs as the codebase evolves, but also increase the maintenance cost for all contributors.
In search of a win-win solution, on behalf of the vLLM team, I propose to separate beam search from the vLLM core and reimplement it on top of the core code.
To be specific, we can:
Add an LLM.beam_search interface that calls the engine to generate one token with logprobs at every step, and keep the beam-search logic only in the LLM.beam_search function. From the initial discussion, one concern is the efficiency of such an implementation, as requests will repeatedly enter and leave the engine from the vLLM core's perspective. This should be solvable, most directly by relying on prefix caching so that each one-token step reuses the KV cache of the shared prefix (see the block-size discussion in the comments above); a sketch of the overall loop is given below.
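The following is a minimal sketch of such a loop, not the final implementation: it assumes the generate-one-token-with-logprobs path described above, the TokensPrompt input type and the logprobs output layout reflect my reading of the current API, and EOS handling, length penalty, deduplication, and detokenization are all omitted for brevity.

```python
# Sketch: beam search layered on top of the public generation API, issuing
# one-token requests with logprobs and keeping all search logic outside the core.
from dataclasses import dataclass

from vllm import LLM, SamplingParams
from vllm.inputs import TokensPrompt  # assumed import path


@dataclass
class Beam:
    tokens: list[int]            # generated token ids so far
    cum_logprob: float = 0.0     # sum of chosen-token logprobs


def beam_search(llm: LLM, prompt_token_ids: list[int],
                beam_width: int = 4, max_tokens: int = 32) -> Beam:
    # one-token steps; request enough logprobs to expand every beam
    params = SamplingParams(max_tokens=1, temperature=0.0,
                            logprobs=2 * beam_width)
    beams = [Beam(tokens=[])]
    for _ in range(max_tokens):
        # re-submit every beam as a fresh request; prefix caching is expected
        # to keep the repeated prefill of the shared prefix cheap
        prompts = [TokensPrompt(prompt_token_ids=prompt_token_ids + b.tokens)
                   for b in beams]
        outputs = llm.generate(prompts, params)

        candidates: list[Beam] = []
        for beam, out in zip(beams, outputs):
            step_logprobs = out.outputs[0].logprobs[0]  # {token_id: Logprob}
            for token_id, lp in step_logprobs.items():
                candidates.append(Beam(
                    tokens=beam.tokens + [token_id],
                    cum_logprob=beam.cum_logprob + lp.logprob))
        # keep the best `beam_width` candidates by cumulative logprob
        beams = sorted(candidates, key=lambda b: b.cum_logprob,
                       reverse=True)[:beam_width]
    return beams[0]
```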
vLLM is a community project, and we'd like not only to seek opinions from beam-search users, but also to seek contributions from them. Your help is truly needed to shape the future of beam-search support in vLLM.
Proposed Change.
Summary of the change: implement beam search on top of the vLLM core and add wrappers for users; remove beam search from the vLLM core (scheduler).
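For illustration, a hypothetical use of the proposed user-facing wrapper; the exact signature of LLM.beam_search (parameter names, return type) is an assumption here, not a settled API.

```python
# Hypothetical call to the proposed wrapper; parameter names are illustrative.
from vllm import LLM

llm = LLM(model="facebook/opt-125m")  # placeholder model
outputs = llm.beam_search(
    prompts=["The capital of France is"],
    beam_width=4,
    max_tokens=16,
)
for out in outputs:
    print(out)
```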
Feedback Period.
1 week, from 9/9 to 9/15 (both inclusive)
CC List.
@hrsmanian @zhouyuan @lanking520 @nightflight-dk @HeegonJin @SemMulder @darabos @DhruvaBansal00 @tmostak @physicsrob @YooSungHyun @denadai2 @sjmielke @Reichenbachian @AaronFriel @hinnefe2 @mflaxman10
@WoosukKwon @zhuohan123 @simon-mo
Any Other Things.
No response