
A800 inference speed too slow #901

Open

Tttobi4s opened this issue Feb 26, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@Tttobi4s

Self Checks

  • This template is only for bug reports. For questions, please visit Discussions.
  • I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
  • I have searched for existing issues, including closed ones. Search issues
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [FOR CHINESE USERS] Please submit issues in English; otherwise they will be closed. Thank you! :)
  • Please do not modify this template; fill in all required fields.

Cloud or Self Hosted

Cloud, Self Hosted (Source)

Environment Details

A800

Steps to Reproduce

I used an A800 to run run_webui.py in Fish-Speech and found that the token generation speed is very unstable (I used the --compile flag). Is this related to the Nvidia GPU model? I noticed that it performs excellently on the 4090. Here is the log:

[Screenshot: inference log showing fluctuating token generation speed]
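To make reports like this more actionable, per-token timing can be captured around the generation loop. A minimal sketch (the `measure_token_latency` helper and the stand-in iterator are assumptions for illustration, not part of Fish-Speech):

```python
# Sketch: time each yielded token to quantify "unstable" generation speed.
# `generate_tokens` is assumed to be any iterator yielding one token at a time;
# with the real model you would pass its streaming generator here instead.
import time

def measure_token_latency(generate_tokens):
    """Return (mean, worst) seconds per token for an iterator of tokens."""
    latencies = []
    last = time.perf_counter()
    for _ in generate_tokens:
        now = time.perf_counter()
        latencies.append(now - last)
        last = now
    mean = sum(latencies) / len(latencies)
    worst = max(latencies)
    return mean, worst

# Stand-in generator so the sketch is runnable without a model:
mean, worst = measure_token_latency(iter(range(100)))
print(f"mean {mean * 1000:.3f} ms/token, worst {worst * 1000:.3f} ms/token")
```

Posting the mean versus worst-case latency (rather than a screenshot alone) makes it easier to see whether the slowdown is a steady gap or occasional stalls, e.g. from recompilation.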

✔️ Expected Behavior

No response

❌ Actual Behavior

No response

@Tttobi4s Tttobi4s added the bug Something isn't working label Feb 26, 2025
@PoTaTo-Mika
Collaborator

There are no other similar issues reported; please check your GPU setup and your torch version.
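For checks like this, an environment report along the following lines can confirm the installed torch version and the visible GPU. This is a hypothetical diagnostic sketch, not a Fish-Speech utility; all names besides the standard `torch` APIs are assumptions:

```python
# Hypothetical diagnostic: report the torch version and visible GPU so a
# version/driver mismatch can be ruled out when comparing A800 vs 4090 runs.
def torch_env_report():
    try:
        import torch
    except ImportError:
        return {"torch": None}  # torch is not installed at all
    info = {
        "torch": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
    }
    if info["cuda_available"]:
        info["gpu"] = torch.cuda.get_device_name(0)  # e.g. an A800 or 4090
        info["cuda_runtime"] = torch.version.cuda
    return info

print(torch_env_report())
```

Including this output in the issue would let maintainers spot an old torch build or a CUDA mismatch at a glance.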
