Int num_sanity_val_steps is always replaced by float limit_val_batches #2882
@williamFalcon @Borda should `num_sanity_val_steps` be limited by `limit_val_batches`? My understanding is they can be independent of one another.
@ananyahjha93 I asked william some time ago: #2246 (comment)
@awaelchli Here: #2246 (comment)
@awaelchli any comments on the above? We can fix it then :)
It depends. What does it currently do in your example?
@awaelchli In #2246 (comment) you said it should run for 5 val_steps, but here https://github.com/PyTorchLightning/pytorch-lightning/blob/0097630a95bddc48d6fb5d3b9a58aef2e8e89b22/pytorch_lightning/trainer/trainer.py#L463-L466 it seems that it's running for 3 val_steps. I suggest
`num_sanity_val_steps` should be int or -1. This is really meant to be a sanity check... floats will break this.

You need this min because if your dataset is smaller than `limit_val_batches`, it will crash.
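A minimal sketch of the clamp being defended here (illustrative values only, not the actual Trainer internals): when both settings are int batch counts, the `min()` keeps the sanity check from requesting more batches than the capped validation loop can provide.

```python
# Sketch of the int/int case: without the clamp, the sanity check could
# ask for more batches than the capped val loader will ever yield.
num_sanity_val_steps = 5   # user asks for 5 sanity-check batches
limit_val_batches = 3      # val loop capped at 3 batches (int case)

effective_steps = min(num_sanity_val_steps, limit_val_batches)
assert effective_steps == 3  # safely capped at what the loader can serve
```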
The type annotations of `num_sanity_val_steps` and `limit_val_batches` are `int` and `Union[int, float]` (a percent or a number of batches), respectively. The minimum of a percent and a `num_batches` value will be the percent (except when `num_batches == 0`): https://github.com/PyTorchLightning/pytorch-lightning/blob/a59e140ee814d8818d121405582133cf6b767e1a/pytorch_lightning/trainer/trainer.py#L461
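A minimal reproduction of that mismatch (illustrative values): `min()` between an int batch count and a float percent returns the percent for any requested step count >= 1, silently discarding `num_sanity_val_steps`.

```python
num_sanity_val_steps = 5    # int: an absolute number of batches
limit_val_batches = 0.25    # float: a fraction of the validation set

print(min(num_sanity_val_steps, limit_val_batches))  # 0.25, not 5
print(min(0, limit_val_batches))  # 0 -- the num_batches == 0 exception
```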
Maybe we can remove the dependency of `num_sanity_val_steps` on `limit_val_batches` and revert to https://github.com/PyTorchLightning/pytorch-lightning/blob/1e68968ed7fb9b8f73df148dd48194d469655ea3/pytorch_lightning/trainer/trainer.py#L491
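One way the decoupling could look, as a hypothetical sketch (`resolve_sanity_steps` and `num_val_batches` are illustrative names, not PyTorch Lightning API): resolve a float `limit_val_batches` into a concrete batch count before clamping, so the int/float `min()` mismatch cannot occur.

```python
from typing import Union

def resolve_sanity_steps(
    num_sanity_val_steps: int,
    limit_val_batches: Union[int, float],
    num_val_batches: int,
) -> int:
    if isinstance(limit_val_batches, float):
        # interpret the float as a fraction of the available val batches
        limit = int(limit_val_batches * num_val_batches)
    else:
        limit = limit_val_batches
    if num_sanity_val_steps == -1:  # -1 means "use every available batch"
        return limit
    return min(num_sanity_val_steps, limit)

# 20 val batches with limit_val_batches=0.25 -> a limit of 5 batches;
# asking for 2 sanity steps now yields 2 instead of the bogus 0.25.
assert resolve_sanity_steps(2, 0.25, 20) == 2
assert resolve_sanity_steps(-1, 0.25, 20) == 5
```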