Sacred logger #656
Conversation
nice work, just add documentation and tests
@Borda It's just a draft pull request, I am aware of the things that need to be done ...
If not planned already in this PR, it would be nice for lightning to intercept the Sacred observer created by command line arguments as in ...
Not sure how this would work though.
@CarloLucibello I can think about it, but I have only used MongoObserver so far. I also don't really know how the options ...
EDIT: I think I figured it out; it should be easy to use hyperparams like in pytorch lightning and store them in a sacred way 🥳 I am currently a bit unsure how to implement ...
Maybe I am just making things too complicated, but I'd appreciate it if someone had an idea 😅 I found some resources that might go in the right direction (maybe we can use lightning's hparams and pass them to ...)
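A minimal sketch of the idea discussed above: flatten Lightning-style hparams (an argparse Namespace or a dict) into a plain dict that could then be handed to a sacred experiment's config. The function name `hparams_to_sacred_config` is hypothetical, not part of either library's API.

```python
# Hypothetical helper: convert Lightning-style hparams into a plain dict
# that sacred could record as experiment config.
from argparse import Namespace


def hparams_to_sacred_config(hparams):
    """Return a plain dict of hyperparameters from a Namespace or mapping."""
    if isinstance(hparams, Namespace):
        return vars(hparams).copy()
    return dict(hparams)


hparams = Namespace(lr=0.001, batch_size=32)
config = hparams_to_sacred_config(hparams)
# The resulting dict could then be registered with a sacred Experiment,
# e.g. via ex.add_config(config) -- shown as a comment only, since this
# sketch does not import sacred.
```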
Alright, after I finish implementing the test cases, I think this is ready for review. I still have a few questions left, maybe someone can answer them in the meantime:
Ah, I am very sorry! Must have made some mistake, because I was actually searching for an issue before commenting, but I couldn't find it.
This pull request is ready imo, but Travis disappeared? I'd like to see if the tests run here before making it "Ready for review".
@expectopatronum can you move it from Draft to normal PR status so we can see the CI build?
@expectopatronum awesome work! @Borda is this ready?
@williamFalcon Last time I checked, two tests were failing but I couldn't figure out why (didn't have time). But it's probably a problem with my tests and not the code itself, because I am using it in my project and everything I expect works. I don't see Travis anymore, so I don't know what the status of the tests is.
@williamFalcon rebase is needed... and it seems that our master is failing, so I would fix master first...
Hi, my test case is working now locally, but Travis still fails with an error related to the TensorboardLogger. I previously implemented two test functions:
The reason for removing it is that I saw that the wandb logger only tests very, very basic stuff. Maybe you can clarify what functionality needs to be working & tested, since not all loggers seem to be tested the same way. I am sorry for taking so long to finish this pull request, but it's the first time I am working with Travis and at first I wasn't able to reproduce the tests locally (many more were failing bc. ...)
Best regards
@expectopatronum we have fixed master, could you rebase it, pls
- Update __init__ interface - Implement run_id, name, version
- make SacredLogger importable
(force-pushed from abbe2ac to 4974847)
@Borda I did a rebase (took me some time to figure out how that works). Is there a reason why you suggest a rebase rather than merging the changes? From what I read while figuring out how rebase works, I got the impression merging would make more sense, since I already pushed my commits to the remote. Now it fails with the error "ERROR: torchvision 0.4.2 has requirement torch==1.3.1, but you'll have torch 1.4.0 which is incompatible." I guess this is something that has to be fixed in your master?
If you do just merge, sometimes the PR diff shows master commits as part of this PR, so it is not clear what the new addition is...
Thanks for the explanation, makes sense :) |
@expectopatronum rebase master or add sacred to tests/requirements.txt? |
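If the second option is taken, the change would presumably be a one-line addition to tests/requirements.txt; the version pin below is an assumption, not specified in this thread:

```
sacred>=0.7.4
```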
test the logger
@@ -0,0 +1,78 @@
"""
Log using `sacred <https://sacred.readthedocs.io/en/stable/index.html>`_
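To make the review comments below easier to follow, here is a minimal sketch of the metric-logging path such a logger might take. A stub stands in for a sacred Experiment so the snippet runs without sacred installed; sacred's real Experiment does expose `log_scalar(name, value, step)`, but `SacredStub` and `SacredLoggerSketch` are illustrative names, not the PR's actual implementation.

```python
# Stub mimicking the one sacred method this sketch needs.
class SacredStub:
    def __init__(self):
        self.scalars = []

    def log_scalar(self, name, value, step=None):
        self.scalars.append((name, value, step))


class SacredLoggerSketch:
    """Illustrative logger that forwards scalar metrics to a sacred experiment."""

    def __init__(self, sacred_experiment):
        self.experiment = sacred_experiment

    def log_metrics(self, metrics, step=None):
        # Sacred only records scalar metrics, so skip non-numeric values.
        for name, value in metrics.items():
            if isinstance(value, (int, float)):
                self.experiment.log_scalar(name, value, step)


exp = SacredStub()
logger = SacredLoggerSketch(exp)
logger.log_metrics({"loss": 0.5, "note": "not a scalar"}, step=1)
```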
pls check the docs usage
def __init__(self, sacred_experiment):
    """Initialize a sacred logger.

    :param sacred.experiment.Experiment sacred_experiment: Required. Experiment object with desired observers
use the updated docs formating
:param sacred_experiment (sacred.experiment.Experiment): ...
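Applied to the constructor in question, the suggested docstring style might look like the following; the class name and parameter description body are illustrative, only the `:param name (type):` layout is the point:

```python
class SacredLoggerDocExample:
    """Illustrative class showing the suggested docstring field format."""

    def __init__(self, sacred_experiment):
        """Initialize a sacred logger.

        :param sacred_experiment (sacred.experiment.Experiment): Required.
            Experiment object with desired observers already attached.
        """
        self.experiment = sacred_experiment
```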
"""Verify that basic functionality of sacred logger works.""" | ||
tutils.reset_seed() | ||
|
||
try: |
this should not be here, the logger shall be tested
tests shall not be tolerant of the missing package...
return self._run_id

@rank_zero_only
def log_hyperparams(self, params):
I just wanted to share my workflow for your consideration. I use Sacred for general configuration, then Ax for hyperparameter optimization. There is one Sacred experiment and one logger during the whole process, but trainer.fit() is called many times by Ax with different hyperparameters. Logging these in log_hyperparams somehow may be beneficial.
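The workflow described above (one long-lived experiment, many trainer.fit() calls with different hyperparameters) suggests recording each call separately rather than overwriting a single config. A hypothetical sketch under that assumption, with `TrialAwareLogger` as an illustrative name and a plain loop standing in for Ax proposing candidates:

```python
class TrialAwareLogger:
    """Illustrative logger that keeps hyperparameters per trial."""

    def __init__(self):
        self.trials = []

    def log_hyperparams(self, params):
        # Append each call's params instead of mutating one shared config,
        # so repeated trainer.fit() calls don't clobber earlier trials.
        self.trials.append(dict(params))


logger = TrialAwareLogger()
for lr in (0.1, 0.01):  # stand-in for Ax proposing candidate hyperparameters
    logger.log_hyperparams({"lr": lr})
    # trainer.fit(model) would run here in the real workflow
```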
by #767 we will do all logger imports at the beginning of the test suite...
@expectopatronum hi! mind rebasing and getting the tests to pass? otherwise we should close this PR until it's ready to be worked on
Hi, I am sorry, I don't have the time right now and I'm also not using pytorch lightning for my experiments anymore. I hoped to finish this two weeks ago, but there are too many changes for me to keep track of.
Is there still a chance of this PR making it through?
mind adding it to bolts? |
Before submitting
What does this PR do?
Fixes #438.
Did you have fun?
Yes, and I learned a lot :)