
[Usage]: How to run VLLM on multiple tpu hosts V4-32 #8582

Open
sparsh35 opened this issue Sep 18, 2024 · 0 comments
Labels
usage How to use vllm

Comments

@sparsh35

Your current environment

The output of `python collect_env.py`

How would you like to use vllm

There is an example for offline inference on TPUs, but it does not use all 4 hosts of a v4-32. If I run the code on every host, Ray only detects each host's own TPU resources. The environment is set up correctly and everything works on a single host, but I don't know how to get vLLM to detect and use all 4 hosts. I would like to do that so I can run bigger models.
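A minimal sketch of one way this is commonly attempted, assuming the four v4-32 hosts are first joined into a single Ray cluster (e.g. `ray start --head --port=6379` on the head host and `ray start --address=<HEAD_IP>:6379` on the other three) before vLLM is launched on the head. The model name is a placeholder, and whether the `distributed_executor_backend="ray"` argument and multi-host TPU support are available depends on the vLLM version:

```python
# Run on the head host only, after every host has joined the Ray cluster.
# Assumptions: placeholder model name; tensor_parallel_size=16 matches the
# 16 chips of a v4-32 (4 hosts x 4 chips); a Ray-based executor backend is
# available in this vLLM build.
import ray
from vllm import LLM, SamplingParams

ray.init(address="auto")  # attach to the already-running multi-host cluster

llm = LLM(
    model="meta-llama/Meta-Llama-3-70B",  # placeholder large model
    tensor_parallel_size=16,              # shard across all 16 v4-32 chips
    distributed_executor_backend="ray",   # spread workers over Ray nodes
)

outputs = llm.generate(
    ["Hello from a TPU pod!"],
    SamplingParams(max_tokens=32),
)
for out in outputs:
    print(out.outputs[0].text)
```

If Ray still reports only a single host's TPUs, running `ray status` on the head host is a quick way to confirm whether all four nodes actually joined the cluster before vLLM started.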

Before submitting a new issue...

  • Make sure you have already searched for relevant issues and asked the chatbot at the bottom right corner of the documentation page, which can answer many frequently asked questions.
sparsh35 added the usage (How to use vllm) label on Sep 18, 2024