
RuntimeError: grad can be implicitly created only for scalar outputs #331

Closed · gsx0015 opened this issue Oct 31, 2019 · 2 comments


gsx0015 commented Oct 31, 2019

When I use a single GPU, it works fine.
But with model = torch.nn.DataParallel(model, device_ids=[0, 1]), I get an error.
How can I solve this problem?

---

I tried loss.backward(torch.Tensor([1, 1])) and loss.sum().backward(), but neither works.
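For context, here is a minimal sketch (not this repo's actual training code) of why the error typically appears with DataParallel: if the loss is computed inside forward(), each GPU returns its own 0-dim loss and DataParallel gathers them into a 1-D tensor, which cannot be used with a plain backward() call. The ToyModel below is a hypothetical module used only to illustrate the reduction-to-scalar pattern.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical ToyModel for illustration: the loss is computed inside
# forward(), which is the situation that triggers the error under
# DataParallel.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 1)

    def forward(self, x, target):
        pred = self.fc(x)
        # Each replica returns a 0-dim loss; DataParallel gathers the
        # replicas' outputs into a 1-D tensor of shape [num_gpus].
        return F.mse_loss(pred, target)

model = nn.DataParallel(ToyModel().cuda(), device_ids=[0, 1])
x = torch.randn(16, 8).cuda()
target = torch.randn(16, 1).cuda()

loss = model(x, target)   # shape [2] with two GPUs, not a scalar
# loss.backward()         # -> RuntimeError: grad can be implicitly
#                         #    created only for scalar outputs
loss.mean().backward()    # reduce to a scalar first, then backward()
```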

@shaoxiongji

Hi, I have the same error. Have you solved this problem?

Flova (Collaborator) commented Aug 3, 2021

Multiple GPUs are not supported at the moment. See the thread for multi-GPU training: #520

Flova closed this as completed Aug 3, 2021