Best Practices: logger.experiment.add_image() at end of epoch when using new simplified pl.Train/EvalResult objects #2728
Comments
I guess one other detail I will add: it's unclear which of these has control over logging 'avg_val_loss', given that this used to be a role of …

Using the new way outlined above, I'm still getting this error:
Looks like using …
It would be great to have an example, possibly in Colab, showing good practices for logging the loss and other metrics, as well as samples (some images, like in a VAE, for instance), over …
I have this all set up for a VAE, but I want to make sure I'm doing best practices with the latest updates. Once we come to a consensus on this, I can provide a colab link. 😁 There's also the bolts repo, which we could update!
I'm trying to log some figures once per epoch in … I don't want to accumulate a list in the validation loop, as I only want one set of images per epoch. Any advice?
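One common workaround for this (a sketch only, not a confirmed answer from this thread): cache a single batch on `self` during `validation_step` and log it once in `validation_epoch_end`, so nothing accumulates across steps. The hook names mirror PyTorch Lightning; `FakeLogger` below is a hypothetical stand-in for `logger.experiment` (e.g. a TensorBoard `SummaryWriter`), and the "reconstruction" is a placeholder computation.

```python
# Hypothetical sketch: keep only the first validation batch, log once per epoch.
# FakeLogger stands in for logger.experiment (e.g. SummaryWriter.add_image).

class FakeLogger:
    def __init__(self):
        self.images = []

    def add_image(self, tag, img, step):
        # A real SummaryWriter would write the image; here we just record the call.
        self.images.append((tag, step))

class Module:
    def __init__(self, logger):
        self.logger = logger
        self.current_epoch = 0
        self._sample = None  # holds at most one batch of reconstructions

    def validation_step(self, batch, batch_idx):
        x_hat = [v * 2 for v in batch]  # placeholder for a reconstruction
        if batch_idx == 0:              # cache only the first batch
            self._sample = x_hat
        return sum(x_hat)               # placeholder for a loss

    def validation_epoch_end(self):
        # Log the cached batch once, then drop the reference to free memory.
        self.logger.add_image("reconstructions", self._sample, self.current_epoch)
        self._sample = None

logger = FakeLogger()
m = Module(logger)
for i, batch in enumerate([[1, 2], [3, 4]]):
    m.validation_step(batch, i)
m.validation_epoch_end()
print(len(logger.images))  # prints 1: only one image call per epoch
```

Because only one batch is ever referenced, the other validation outputs stay eligible for garbage collection regardless of how long the epoch is.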
I have this same question... what are the best practices for logging images? My usual wandb.log seems to no longer work, as it is now WandbLogger. I read the Train/EvalResult page, but the documentation seems sparse here.
@joshclancy You should still be able to import …
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
Maybe this should work in validation and testing:
Question:
What are we going to consider best practice for visualizing images, embeddings, etc. to tensorboard when using pl.Train/EvalResult objects?
In light of #2651 and related PRs, what's the right way to do this?
Let's say we have a dataset of images, and we want to visualize a single batch of reconstructions once per epoch. I typically do this in validation_epoch_end(), using logger.experiment.add_image().

Code:
Let's say my code now looks like this:
which works fine and is definitely much cleaner than the original method of returning multiple logging dicts. 😁
I now want to do something at the end of the validation loop, so I specify:

outputs in this case is a list of tuples, where the first element is the EvalResult for each val step, and the second element contains step_dict, which includes all losses and reconstructed x_hats for each val step.

Is there a better way? One potential downside to this is that outputs can eat up a significant chunk of memory if you're not careful.
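To make the epoch-end step concrete (a sketch under the structure described above, since the original code block isn't preserved here): outputs is assumed to be a list of (eval_result, step_dict) tuples, and the names avg_val_loss, step_dict, and x_hat follow the discussion; the literal values are made up for illustration.

```python
# Hypothetical shape of `outputs` as described above: one
# (eval_result, step_dict) tuple per validation step.
outputs = [
    (object(), {"val_loss": 0.4, "x_hat": [0.1, 0.2]}),
    (object(), {"val_loss": 0.2, "x_hat": [0.3, 0.4]}),
]

# Average the per-step losses across the epoch.
avg_val_loss = sum(d["val_loss"] for _, d in outputs) / len(outputs)

# Keep just one batch of reconstructions for add_image(), so the rest
# of `outputs` can be dropped instead of held in memory all epoch.
first_x_hat = outputs[0][1]["x_hat"]
```

This is where the memory concern bites: holding every x_hat for the whole epoch is only necessary if you actually visualize them all, so slicing out one batch (or caching it during validation_step instead) keeps the footprint flat.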