Unnecessary computation in backward pass #2
Comments
It is still very slow when turning this flag off; I don't know the reason.
I find that training this net on GPU for 1 iteration, with batch size 64 on CelebA, costs me nearly 30 seconds.
...That's probably a problem with your setup. It should be on the order of 0.4 seconds on a good GPU.
Hi, I use the default parameter setup, and my GPU has 12 GB of memory. I don't know why it is so slow.
Thank you for the tip! We were not aware of that.
In the code of WassersteinGAN, they have this line:
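```python
# From the WassersteinGAN training loop (main.py), set just before the G update
# (reconstructed here from that repo, since the quoted snippet is missing above):
for p in netD.parameters():
    p.requires_grad = False  # to avoid computation
```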
I think it means that when you train G, by default you'll compute gradients for D as well (though not update them), and vice versa. Setting the flag to False to avoid that computation should speed up training a lot.
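For context, here is a minimal sketch of how that pattern fits into a WGAN-style PyTorch training loop. The tiny `netG`/`netD` modules, the batch and latent sizes, and the learning rate are placeholders for illustration, not this repo's actual code (weight clipping is also omitted for brevity):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real generator/discriminator.
netG = nn.Sequential(nn.Linear(100, 64), nn.ReLU(), nn.Linear(64, 784))
netD = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1))
optimizerD = torch.optim.RMSprop(netD.parameters(), lr=5e-5)
optimizerG = torch.optim.RMSprop(netG.parameters(), lr=5e-5)

real = torch.randn(64, 784)  # stand-in for a real data batch

# (1) Update D: re-enable gradients for D's parameters.
for p in netD.parameters():
    p.requires_grad = True  # they are set to False below, in the G update

optimizerD.zero_grad()
noise = torch.randn(64, 100)
# WGAN critic loss: push D(fake) down and D(real) up.
errD = netD(netG(noise).detach()).mean() - netD(real).mean()
errD.backward()
optimizerD.step()

# (2) Update G: freeze D so backward() skips gradients for D's parameters.
for p in netD.parameters():
    p.requires_grad = False  # avoids the unnecessary computation

optimizerG.zero_grad()
errG = -netD(netG(torch.randn(64, 100))).mean()  # WGAN generator loss
errG.backward()  # gradients still flow *through* D to reach G,
                 # but D's parameter .grad buffers are not computed
optimizerG.step()
```

The key point is that `requires_grad = False` only skips accumulating gradients for D's own weights; the backward pass still propagates through D's activations, which is what G needs.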
I found that my TensorFlow implementation runs much faster than this code, and this is probably the reason.