Lightning logger aggregation API deprecation #9145
Comments
Thanks for filing this @edward-io! I believe this is very related to #8991 and #9004. I agree that we can simplify the base interface for logging metrics. At the very least I see:
Do we need a …
Where would you move it?
Sounds good, as long as we keep exactly the same logging behavior.
What we know as flush in the logger connector is just the save() function in the loggers.
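To illustrate that relationship, here is a minimal, hypothetical custom logger in which log_metrics only buffers rows and save() is the flush that writes them out. The class name, file path, and buffering scheme are invented for the example and are not part of Lightning.

```python
from pytorch_lightning.loggers import LightningLoggerBase
from pytorch_lightning.utilities import rank_zero_only


class BufferedFileLogger(LightningLoggerBase):
    """Hypothetical logger: log_metrics buffers rows, save() flushes them to disk."""

    def __init__(self, path="metrics.log"):
        super().__init__()
        self._path = path
        self._buffer = []  # rows accumulated since the last flush

    @property
    def name(self):
        return "buffered_file"

    @property
    def version(self):
        return 0

    @property
    def experiment(self):
        return None  # no underlying experiment object in this sketch

    @rank_zero_only
    def log_hyperparams(self, params):
        pass  # not relevant to the flush/save relationship shown here

    @rank_zero_only
    def log_metrics(self, metrics, step=None):
        self._buffer.append((step, dict(metrics)))  # buffer only, no I/O here

    @rank_zero_only
    def save(self):
        # this is the "flush": everything buffered so far gets written out
        super().save()
        with open(self._path, "a") as f:
            for step, metrics in self._buffer:
                f.write(f"{step}: {metrics}\n")
        self._buffer.clear()
```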
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, PyTorch Lightning Team!
We use some metrics as counters which keep accumulating newly logged values. Without …
@wilson100hong can you put your custom logic into … ?
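Assuming the suggestion is to move the accumulation into a user-owned logger subclass (the dropped identifier above isn't recoverable), a minimal sketch of counter-style metrics that keep accumulating across calls could look like this. CountingCSVLogger and counter_keys are invented names for the example.

```python
from collections import defaultdict

from pytorch_lightning.loggers import CSVLogger


class CountingCSVLogger(CSVLogger):
    """Hypothetical: selected metrics behave as counters that accumulate across calls."""

    def __init__(self, *args, counter_keys=("num_samples",), **kwargs):
        super().__init__(*args, **kwargs)
        self._counter_keys = set(counter_keys)
        self._totals = defaultdict(float)

    def log_metrics(self, metrics, step=None):
        # replace counter-style values with their running totals before logging
        metrics = dict(metrics)
        for key in self._counter_keys & metrics.keys():
            self._totals[key] += metrics[key]
            metrics[key] = self._totals[key]
        super().log_metrics(metrics, step)
```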
I did a test in https://colab.research.google.com/drive/1SSOOmeJJf8eU4HE4UkPgd2GbtpfRPQa_?usp=sharing and it seems to me that the aggregation logic in … Thus I believe we can move forward with the deprecation of … with …
Why would they be averaged? The example only makes sense when the keys you log are different.

```python
agg_and_log_metrics("key1", 1.0)  # this adds the key1, does not write out the logs
agg_and_log_metrics("key2", 2.0)  # this adds the key2, does not write out the logs
agg_and_log_metrics("key3", 3.0)  # this adds the key3, does not write out the logs

# this will make the actual call to self.log_metrics
# and this will log a single dict with all keys aggregated: {"key1": 1.0, "key2": 2.0, "key3": 3.0}
save()
```

Let's please not remove this feature under the wrong understanding.
@awaelchli I am confused by your last comment :/ I was under the impression that aggregation was done per key?
They would be averaged because that is the default aggregation function: https://github.com/PyTorchLightning/pytorch-lightning/blob/f79d75d4b5e50eb3e37073606eba5ca10d48c99a/pytorch_lightning/loggers/base.py#L66 Also, in my previous comment where I said "I am logging the value 2.0 on even steps and 3.0 on odd steps", instead of step I meant …
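For readers following along, the per-key override that replaces that default mean is exposed through agg_key_funcs / agg_default_func (the arguments discussed in the issue body below). A rough usage sketch, assuming the update_agg_funcs keyword names from base.py at that commit:

```python
import numpy as np

from pytorch_lightning.loggers import TensorBoardLogger

logger = TensorBoardLogger("logs/")

# aggregate "num_samples" by summing; every other key falls back to the default (mean)
logger.update_agg_funcs(
    agg_key_funcs={"num_samples": np.sum},
    agg_default_func=np.mean,
)

# repeated calls for the same step are merged with those functions and
# written out as one dict when the step changes (or when save() is called)
logger.agg_and_log_metrics({"num_samples": 32, "loss": 0.5}, step=0)
logger.agg_and_log_metrics({"num_samples": 32, "loss": 0.3}, step=0)
logger.save()
```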
Closing this issue as all PRs have landed!
Proposed refactoring or deprecation
Move and/or deprecate aggregation-related code to individual loggers
Motivation
We are auditing the Lightning components and APIs to assess opportunities for improvements:
https://docs.google.com/document/d/1xHU7-iQSpp9KJTjI3As2EM0mfNHHr37WZYpDpwLkivA/edit#
Review Lightning architecture & API #7740
Revisiting some of the API decisions regarding aggregations in Lightning Logger to simplify the interface and move logic specific to individual loggers away from the base class:

agg_and_log_metrics or log_metrics. In LoggerConnector, when log_metrics is called, we call agg_and_log_metrics.
LightningLoggerBase.__init__ accepts two arguments, agg_key_funcs and agg_default_func. They aren't being called in any sub-classed loggers within Lightning. They can be implementation details of the loggers instead.
update_agg_funcs … directly?

A simplified sketch of this call path is shown after the list.
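The following is a condensed, illustrative paraphrase of that path, not the verbatim base-class code; the _merge helper stands in for Lightning's merge_dicts, and the method bodies are simplified.

```python
import numpy as np


def _merge(dicts, key_funcs, default_func):
    # stand-in for the base module's merge_dicts helper: per-key aggregation
    keys = {k for d in dicts for k in d}
    return {k: key_funcs.get(k, default_func)([d[k] for d in dicts if k in d]) for k in keys}


class SimplifiedLoggerBase:
    """Condensed sketch of LightningLoggerBase's current aggregation flow."""

    def __init__(self, agg_key_funcs=None, agg_default_func=np.mean):
        self._agg_key_funcs = agg_key_funcs or {}
        self._agg_default_func = agg_default_func
        self._metrics_to_agg = []
        self._prev_step = None

    def agg_and_log_metrics(self, metrics, step=None):
        # buffer metrics for the current step; once the step changes, merge the
        # buffered dicts (mean by default) and emit them through log_metrics
        if step == self._prev_step or not self._metrics_to_agg:
            self._metrics_to_agg.append(metrics)
            self._prev_step = step
            return
        merged = _merge(self._metrics_to_agg, self._agg_key_funcs, self._agg_default_func)
        self.log_metrics(merged, self._prev_step)
        self._metrics_to_agg = [metrics]
        self._prev_step = step

    def save(self):
        # flushing means aggregating whatever is still buffered and logging it
        if self._metrics_to_agg:
            merged = _merge(self._metrics_to_agg, self._agg_key_funcs, self._agg_default_func)
            self.log_metrics(merged, self._prev_step)
            self._metrics_to_agg = []

    def log_metrics(self, metrics, step=None):
        raise NotImplementedError  # each concrete logger writes metrics to its backend here
```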
Pitch

Simplify LightningLoggerBase.__init__ by removing agg_key_funcs and agg_default_func.
log_metrics and agg_and_log_metrics. Proposal: the trainer should only call log_metrics. A sketch of what this would mean for an individual logger follows.
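Under that proposal, per-step averaging would no longer live in the base class; a project that relies on it could keep the behavior inside its own logger. A hypothetical sketch (AveragingLogger is an invented name, and it assumes the trainer only ever calls log_metrics and save):

```python
import numpy as np


class AveragingLogger:
    """Hypothetical logger that keeps the old 'average within a step' behavior locally."""

    def __init__(self):
        self._step = None
        self._buffer = []

    def log_metrics(self, metrics, step=None):
        # aggregation becomes an implementation detail of this logger, not the base class
        if step != self._step and self._buffer:
            self._flush()
        self._step = step
        self._buffer.append(metrics)

    def save(self):
        if self._buffer:
            self._flush()

    def _flush(self):
        keys = {k for d in self._buffer for k in d}
        averaged = {k: float(np.mean([d[k] for d in self._buffer if k in d])) for k in keys}
        print(f"step={self._step}: {averaged}")  # a real logger would write to its backend
        self._buffer = []
```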
Additional context
Related issues:
#8991
#9004
If you enjoy Lightning, check out our other projects! ⚡
Metrics: Machine learning metrics for distributed, scalable PyTorch applications.
Flash: The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, finetuning and solving problems with deep learning
Bolts: Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch
Lightning Transformers: Flexible interface for high performance research using SOTA Transformers leveraging Pytorch Lightning, Transformers, and Hydra.