Learning rate finder crashes if accumulate_grad_batches is not set to 1 #1726
Comments
Just to be sure, is it a typo that the trainer that gets initialized is called …?
@SkafteNicki yeah, sorry, I just tried different trainers and copied the wrong one.
This is very strange. Just to be sure, do you want to accumulate gradients during the learning rate finder, or is it just for later fitting?
I want to accumulate batches in training, so I suppose I should set …
No, nothing wrong with your understanding of the code. I have found a solution to the problem and will create a PR soon.
I'm having the same error. Any solutions ready to be pulled in?
Just use the …

[solution doesn't work]
@jopo666 @florisdf I do not think that will solve the problem if the goal is to accumulate gradients during the learning rate finder. Tested on a nightly from last week.
Original issue description:

I'm not sure if it is expected behavior or a bug, but when I'm trying to find a learning rate like this:
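The original snippet is not preserved in this page. As a stand-in, here is a minimal sketch of the kind of setup being described, assuming a toy LightningModule and the `trainer.lr_find` API from the Lightning versions this issue targets; the model, data, and every trainer argument except `accumulate_grad_batches=8` are made up for illustration:

```python
import torch
from torch.nn import functional as F
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Hypothetical toy model, only to illustrate the reported setup."""

    def __init__(self, lr=1e-3):
        super().__init__()
        self.lr = lr
        self.layer = torch.nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": F.cross_entropy(self(x), y)}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

    def train_dataloader(self):
        x = torch.randn(512, 32)
        y = torch.randint(0, 2, (512,))
        return DataLoader(TensorDataset(x, y), batch_size=16)


model = LitClassifier()

# Gradient accumulation is the setting that triggers the reported crash.
trainer = pl.Trainer(max_epochs=1, accumulate_grad_batches=8)

# In the Lightning versions this issue was filed against, the finder hangs off
# the trainer directly; newer releases expose it through the tuner instead.
lr_finder = trainer.lr_find(model)
print(lr_finder.suggestion())
```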
It throws the error `AttributeError: 'NoneType' object has no attribute 'item'`, which happens on line 335 of lr_finder.py: `current_loss = trainer.running_loss.last().item()`. When I remove `accumulate_grad_batches=8`, everything works as expected. If it is expected behavior, I suggest implementing a more expressive error message.
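Regarding the "more expressive error message" suggestion, a minimal sketch of the kind of pre-check the finder could run is below. This is hypothetical, not Lightning's actual implementation, and the `check_lr_find_supported` name is made up; `MisconfigurationException` and `trainer.accumulate_grad_batches` do exist in the library.

```python
from pytorch_lightning.utilities.exceptions import MisconfigurationException


def check_lr_find_supported(trainer):
    """Hypothetical guard: fail with a clear message instead of the
    AttributeError on trainer.running_loss.last() reported above."""
    if trainer.accumulate_grad_batches != 1:
        raise MisconfigurationException(
            "The learning rate finder does not support accumulate_grad_batches != 1; "
            "run it with accumulation disabled, then enable accumulation for fitting."
        )
```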