WandbLogger warning not logging logs. #2015
borisdayma added a commit to borisdayma/pytorch-lightning that referenced this issue on Jun 2, 2020: New training loops reset step to 0 which would previously try to overwrite logs (fix Lightning-AI#2015).
williamFalcon pushed a commit that referenced this issue on Jun 2, 2020: fix(wandb): use same logger on multiple training loops. New training loops reset step to 0 which would previously try to overwrite logs (fix #2015). docs(changelog.md): add reference to PR 2055.
justusschock pushed a commit that referenced this issue on Jun 29, 2020: fix(wandb): use same logger on multiple training loops. New training loops reset step to 0 which would previously try to overwrite logs (fix #2015). docs(changelog.md): add reference to PR 2055.
🐛 Bug
WandbLogger giving warning:
WARNING Adding to old History rows isn't currently supported. Step 25 < 38
and not logging when I use the WandbLogger with k-fold cross-validation, because there I am using the same instance of wandb_logger but calling trainer.fit multiple times with different train_dl and valid_dl. Since the step counter restarts for each fold, nothing is logged after the first fold completes, even though the log keys are completely different. It was working perfectly with pytorch-lightning v0.7.4. For now, I have to create separate experiments for each fold, which are hard to analyze on wandb.

To Reproduce
Code sample
Colab Notebook
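The notebook itself is not included here; a minimal sketch of the pattern that triggers the warning is shown below. The module name MyLightningModule, the helper make_fold_dataloaders, and the project name are placeholders for illustration, not code taken from the notebook.

import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# A single logger instance is reused so that every fold reports into the same wandb run.
wandb_logger = WandbLogger(project="kfold-demo")  # placeholder project name

for fold, (train_dl, valid_dl) in enumerate(make_fold_dataloaders()):  # hypothetical helper
    model = MyLightningModule()  # placeholder LightningModule
    trainer = pl.Trainer(logger=wandb_logger, max_epochs=5)
    # Each call to trainer.fit restarts global_step at 0, so from the second
    # fold onward wandb receives steps smaller than the ones it has already
    # recorded and prints:
    #   WARNING Adding to old History rows isn't currently supported. Step 25 < 38
    trainer.fit(model, train_dl, valid_dl)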
Expected behavior
It should log even when the global_step is repeated, as long as the log keys are different.
Environment
How you installed PyTorch (conda, pip, source): pip

Additional context