
logging loss bug when accumulate_grad_batches >= 1 #2569

Closed
YuxianMeng opened this issue Jul 9, 2020 · 1 comment
Labels
bug (Something isn't working) · help wanted (Open to be worked on)

Comments

@YuxianMeng

YuxianMeng commented Jul 9, 2020

🐛 Bug

When accumulate_grad_batches >= 1, the logged loss is divided by accumulate_grad_batches.
For example, with accumulate_grad_batches=1 the logged loss is 4; with accumulate_grad_batches=2 it is 2; with accumulate_grad_batches=4 it is 1.
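
For illustration, here is a minimal plain-PyTorch sketch of the usual accumulation pattern (not Lightning's actual internals, just an assumption about where the scaling happens): the loss is divided by accumulate_grad_batches before backward() so the accumulated gradient matches a single large batch, and if that already-scaled value is the one that gets logged, the reported loss shrinks by exactly that factor.

```python
import torch

def train(model, optimizer, loader, accumulate_grad_batches=2):
    # Hypothetical accumulation loop: the loss is scaled so that the
    # accumulated gradient matches one large batch.
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        raw_loss = torch.nn.functional.mse_loss(model(x), y)
        scaled_loss = raw_loss / accumulate_grad_batches
        scaled_loss.backward()

        # If the scaled value is what gets logged, the curve drops by a
        # factor of accumulate_grad_batches compared to the raw loss.
        print(f"step {step}: logged={scaled_loss.item():.4f} raw={raw_loss.item():.4f}")

        if (step + 1) % accumulate_grad_batches == 0:
            optimizer.step()
            optimizer.zero_grad()
```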

Expected behavior

The logged loss should be roughly the same regardless of the value of accumulate_grad_batches.
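
One way to get a scale-independent number (a sketch under the same assumptions as above, not the actual change that landed in #2738) is to log the raw loss and apply the division only for the backward pass:

```python
import torch

def train_with_stable_logging(model, optimizer, loader, accumulate_grad_batches=2):
    # Same loop as the sketch above, but the raw (unscaled) loss is logged,
    # so the reported value does not depend on accumulate_grad_batches.
    optimizer.zero_grad()
    for step, (x, y) in enumerate(loader):
        raw_loss = torch.nn.functional.mse_loss(model(x), y)
        (raw_loss / accumulate_grad_batches).backward()  # scale only the gradient
        print(f"step {step}: logged={raw_loss.item():.4f}")

        if (step + 1) % accumulate_grad_batches == 0:
            optimizer.step()
            optimizer.zero_grad()
```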

Environment

  • PyTorch Version (e.g., 1.0): nightly
  • OS (e.g., Linux): Linux
  • How you installed PyTorch (conda, pip, source): pip
  • Build command you used (if compiling from source):
  • Python version: 3.6
  • CUDA/cuDNN version: 10.2
  • GPU models and configuration:
  • Any other relevant information:

Additional context

@awaelchli
Contributor

Fixed in #2738
