* show progress bar dependent on refresh_rate
* test that progress_bar_refresh_rate controls showing the bar
* remove show_progress_bar from other tests
* borda fixes
* flake8 fix
* changelog update prog bar refresh rate
* move show_progress_bar to deprecated 0.9 api
* rm show_progress_bar references, test deprecated
* Update pytorch_lightning/trainer/__init__.py
* fix test
* changelog
* minor CHANGELOG.md format
* Update pytorch_lightning/trainer/__init__.py
* Update pytorch_lightning/trainer/trainer.py
Co-authored-by: Gerard Bentley <[email protected]>
Co-authored-by: William Falcon <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: J. Borovec <[email protected]>
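
In practice, the change works roughly as below. This is a minimal sketch assuming the 0.7-era `Trainer` API in this repository, not code taken from the PR itself:

```python
from pytorch_lightning import Trainer

# Throttle: redraw the progress bar every 50 batches (useful in notebooks,
# per the `progress_bar_refresh_rate=50` flag added in #926).
trainer = Trainer(progress_bar_refresh_rate=50)

# New behavior from this PR: a refresh rate of 0 disables the bar entirely,
# replacing the now-deprecated `show_progress_bar=False`.
trainer = Trainer(progress_bar_refresh_rate=0)
```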
CHANGELOG.md (+10 −11)
@@ -26,6 +26,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Changed
 
+- Changed `progress_bar_refresh_rate` trainer flag to disable progress bar when set to 0. ([#1108](https://github.com/PyTorchLightning/pytorch-lightning/pull/1108))
 - Enhanced `load_from_checkpoint` to also forward params to the model ([#1307](https://github.com/PyTorchLightning/pytorch-lightning/pull/1307))
 - Updated references to self.forward() to instead use the `__call__` interface. ([#1211](https://github.com/PyTorchLightning/pytorch-lightning/pull/1211))
 - Added option to run without an optimizer by returning `None` from `configure_optimizers`. ([#1279](https://github.com/PyTorchLightning/pytorch-lightning/pull/1279))
@@ -44,6 +45,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
@@ -72,9 +74,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
-- Added automatic sampler setup. Depending on DDP or TPU, lightning configures the sampler correctly (user needs to do nothing) ([#926](https://github.com/PyTorchLightning/pytorch-lightning/pull/926))
-- Added `reload_dataloaders_every_epoch=False` flag for trainer. Some users require reloading data every epoch ([#926](https://github.com/PyTorchLightning/pytorch-lightning/pull/926))
-- Added `progress_bar_refresh_rate=50` flag for trainer. Throttle refresh rate on notebooks ([#926](https://github.com/PyTorchLightning/pytorch-lightning/pull/926))
+- Added automatic sampler setup. Depending on DDP or TPU, lightning configures the sampler correctly (user needs to do nothing) ([#926](https://github.com/PyTorchLightning/pytorch-lightning/pull/926))
+- Added `reload_dataloaders_every_epoch=False` flag for trainer. Some users require reloading data every epoch ([#926](https://github.com/PyTorchLightning/pytorch-lightning/pull/926))
+- Added `progress_bar_refresh_rate=50` flag for trainer. Throttle refresh rate on notebooks ([#926](https://github.com/PyTorchLightning/pytorch-lightning/pull/926))
 - Updated governance docs
 - Added a check to ensure that the metric used for early stopping exists before training commences ([#542](https://github.com/PyTorchLightning/pytorch-lightning/pull/542))
 - Added `optimizer_idx` argument to `backward` hook ([#733](https://github.com/PyTorchLightning/pytorch-lightning/pull/733))
@@ -97,7 +99,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added max/min number of steps in `Trainer` ([#728](https://github.com/PyTorchLightning/pytorch-lightning/pull/728))
-
 
 ### Changed
 
 - Improved `NeptuneLogger` by adding `close_after_fit` argument to allow logging after training ([#908](https://github.com/PyTorchLightning/pytorch-lightning/pull/1084))
@@ -109,17 +110,17 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Freezed models `hparams` as `Namespace` property ([#1029](https://github.com/PyTorchLightning/pytorch-lightning/pull/1029))
 - Dropped `logging` config in package init ([#1015](https://github.com/PyTorchLightning/pytorch-lightning/pull/1015))
 - Renames model steps ([#1051](https://github.com/PyTorchLightning/pytorch-lightning/pull/1051))
 - Deprecated `LightningModule.load_from_metrics` in favour of `LightningModule.load_from_checkpoint` ([#995](https://github.com/PyTorchLightning/pytorch-lightning/pull/995), [#1079](https://github.com/PyTorchLightning/pytorch-lightning/pull/1079))
 - Deprecated model steps `training_end`, `validation_end` and `test_end` ([#1051](https://github.com/PyTorchLightning/pytorch-lightning/pull/1051), [#1056](https://github.com/PyTorchLightning/pytorch-lightning/pull/1056))
 
 ### Removed
@@ -309,9 +310,7 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
-- Added the flag `log_gpu_memory` to `Trainer` to deactivate logging of GPU
-  memory utilization
-- Added SLURM resubmit functionality (port from test-tube)
+- Added the flag `log_gpu_memory` to `Trainer` to deactivate logging of GPU memory utilization
 - Added optional weight_save_path to trainer to remove the need for a checkpoint_callback when using cluster training
 - Added option to use single gpu per node with `DistributedDataParallel`
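
The commit steps "move show_progress_bar to deprecated 0.9 api" and "test deprecated" imply a shim that maps the old flag onto the new one. A hypothetical sketch of that mapping follows; the helper name, warning text, and defaults are assumptions, not the PR's actual code:

```python
import warnings

def _refresh_rate_from_deprecated_flag(show_progress_bar, progress_bar_refresh_rate):
    """Map the deprecated `show_progress_bar` onto `progress_bar_refresh_rate`.

    Hypothetical helper for illustration only.
    """
    if show_progress_bar is not None:
        warnings.warn(
            "`show_progress_bar` is deprecated and will be removed in 0.9.0; "
            "use `progress_bar_refresh_rate` instead.",
            DeprecationWarning,
        )
        if not show_progress_bar:
            # show_progress_bar=False becomes a refresh rate of 0 (bar disabled).
            progress_bar_refresh_rate = 0
    return progress_bar_refresh_rate
```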