resolving documentation warnings #833

Merged: 35 commits, Feb 27, 2020
Commits
1388e06
add more underline
hanbyul-kim Feb 10, 2020
3d084ce
fix LightningMudule import error
hanbyul-kim Feb 10, 2020
49b43f3
remove unneeded blank line
hanbyul-kim Feb 10, 2020
59b5c4f
escape asterisk to fix inline emphasis warning
hanbyul-kim Feb 10, 2020
f58573e
add PULL_REQUEST_TEMPLATE.md
hanbyul-kim Feb 10, 2020
68f8b5c
add __init__.py and import imagenet_example
hanbyul-kim Feb 10, 2020
161fc38
fix duplicate label
hanbyul-kim Feb 10, 2020
6d8fe9d
add noindex option to fix duplicate object warnings
hanbyul-kim Feb 11, 2020
f9aff2c
remove unexpected indent
hanbyul-kim Feb 11, 2020
52ccb99
refer explicit LightningModule
hanbyul-kim Feb 11, 2020
8fc8d42
fix minor bug
hanbyul-kim Feb 11, 2020
4a24579
refer EarlyStopping explicitly
hanbyul-kim Feb 11, 2020
3479c59
restore exclude patterns
hanbyul-kim Feb 11, 2020
effce0f
change the way how to refer class
hanbyul-kim Feb 12, 2020
bc39a4a
remove unused import
hanbyul-kim Feb 12, 2020
d7fa93d
update badges & drop Travis/Appveyor (#826)
Borda Feb 14, 2020
10b3830
fix missing PyPI images & CI badges (#853)
Borda Feb 16, 2020
cd5136a
docs - anchor links (#848)
Borda Feb 16, 2020
b66700f
add Greeting action (#843)
Borda Feb 16, 2020
09b060e
add pep8speaks (#842)
Borda Feb 16, 2020
28f6de2
advanced profiler describe + cleaned up tests (#837)
jeremyjordan Feb 16, 2020
4fed7f5
Update lightning_module_template.py
williamFalcon Feb 16, 2020
889a96e
Update lightning.py
williamFalcon Feb 16, 2020
7a776f5
respond lint issues
hanbyul-kim Feb 16, 2020
eaf1fcd
break long line
hanbyul-kim Feb 16, 2020
6bceec1
break more lines
hanbyul-kim Feb 16, 2020
794cf17
checkout conflicting files from master
hanbyul-kim Feb 17, 2020
f45bc4d
shorten url
hanbyul-kim Feb 17, 2020
1c4e78b
checkout from upstream/master
hanbyul-kim Feb 17, 2020
88b153f
remove trailing whitespaces
hanbyul-kim Feb 17, 2020
c852661
remove unused import LightningModule
hanbyul-kim Feb 22, 2020
80e4bd7
fix sphinx bot warnings
hanbyul-kim Feb 23, 2020
0164c24
Merge branch 'master' into hotfix/remove_warnings
Borda Feb 26, 2020
2c4931f
Apply suggestions from code review
Borda Feb 26, 2020
dabf843
Update .github/workflows/greetings.yml
Borda Feb 27, 2020
4 changes: 2 additions & 2 deletions .github/workflows/greetings.yml
@@ -10,5 +10,5 @@ jobs:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
issue-message: 'Hey, thanks for your contribution! Great first issue!'
pr-message: 'Hey, thanks for the input! Please give us a bit of time to review it!'
issue-message: 'Hi! thanks for your contribution!, great first issue!'
pr-message: 'Hey thanks for the input! Please give us a bit of time to review it!'
2 changes: 1 addition & 1 deletion .pep8speaks.yml
@@ -5,7 +5,7 @@ scanner:
linter: pycodestyle # Other option is flake8

pycodestyle: # Same as scanner.linter value. Other option is flake8
max-line-length: 120 # Default is 79 in PEP 8
max-line-length: 100 # Default is 79 in PEP 8
ignore: # Errors and warnings to ignore
- W504 # line break after binary operator
- E402 # module level import not at top of file
3 changes: 2 additions & 1 deletion docs/source/callbacks.rst
@@ -4,11 +4,12 @@
Callbacks
=========
.. automodule:: pytorch_lightning.callbacks
:noindex:
:exclude-members:
_del_model,
_save_model,
on_epoch_end,
on_train_end,
on_epoch_start,
check_monitor_top_k,
on_train_start,
on_train_start,
2 changes: 1 addition & 1 deletion docs/source/checkpointing.rst
@@ -71,7 +71,7 @@ If you want to pick up training from where you left off, you have a few options.
trainer = Trainer(logger=logger)
trainer.fit(model)

2. A second option is to pass in a path to a checkpoint (see: :ref:`pytorch_lightning.trainer`).
2. A second option is to pass in a path to a checkpoint (see: :ref:`pytorch_lightning.trainer.trainer.Trainer`).

.. code-block:: python

2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -355,10 +355,10 @@ def find_source():
autodoc_default_options = {
'members': None,
'special-members': '__call__',
'undoc-members': True,
# 'exclude-members': '__weakref__',
'show-inheritance': True,
'private-members': True,
'noindex': True,
}

# Sphinx will add “permalinks” for each heading and description environment as paragraph signs that
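For reference, a minimal sketch of a Sphinx conf.py using the 'noindex' default added in the hunk above; the keys shown are the ones visible in this diff, and Sphinx's autodoc_default_options accepts 'noindex' as a boolean flag:

.. code-block:: python

    # conf.py (sketch): autodoc defaults; 'noindex': True keeps autodoc from
    # registering the same object twice and raising "duplicate object" warnings.
    autodoc_default_options = {
        'members': None,
        'special-members': '__call__',
        'show-inheritance': True,
        'private-members': True,
        'noindex': True,
    }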
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -69,6 +69,7 @@ PyTorch-Lightning Documentation
CODE_OF_CONDUCT.md
CONTRIBUTING.md
BECOMING_A_CORE_CONTRIBUTOR.md
PULL_REQUEST_TEMPLATE.md
governance.md

Indices and tables
1 change: 1 addition & 0 deletions docs/source/lightning-module.rst
@@ -5,6 +5,7 @@ LightningModule
===============

.. automodule:: pytorch_lightning.core
:noindex:
:exclude-members:
_abc_impl,
summarize,
1 change: 1 addition & 0 deletions docs/source/loggers.rst
@@ -4,6 +4,7 @@
Loggers
===========
.. automodule:: pytorch_lightning.loggers
:noindex:
:exclude-members:
_abc_impl,
_save_model,
6 changes: 3 additions & 3 deletions docs/source/optimizers.rst
@@ -53,9 +53,9 @@ Lightning will call each optimizer sequentially:


Step optimizers at arbitrary intervals
-------------------------------------
----------------------------------------
To do more interesting things with your optimizers such as learning rate warm-up or odd scheduling,
override the :meth:`optimizer_step' function.
override the :meth:`optimizer_step` function.

For example, here step optimizer A every 2 batches and optimizer B every 4 batches

@@ -96,4 +96,4 @@ Here we add a learning-rate warm up

# update params
optimizer.step()
optimizer.zero_grad()
optimizer.zero_grad()
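For context, a rough sketch of the optimizer_step override that the warm-up text in this file describes; the hook's argument names and the hparams attribute are assumptions, not taken from this diff:

.. code-block:: python

    # Sketch of learning-rate warm-up via optimizer_step (argument names assumed).
    class MyModel(LightningModule):

        def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                           second_order_closure=None):
            # linearly scale the learning rate over the first 500 steps
            if self.trainer.global_step < 500:
                lr_scale = min(1.0, float(self.trainer.global_step + 1) / 500.0)
                for pg in optimizer.param_groups:
                    pg['lr'] = lr_scale * self.hparams.learning_rate

            # update params
            optimizer.step()
            optimizer.zero_grad()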
3 changes: 2 additions & 1 deletion docs/source/profiler.rst
@@ -3,8 +3,9 @@


Performance and Bottleneck Profiler
===========
===================================
.. automodule:: pytorch_lightning.profiler
:noindex:
:exclude-members:
_abc_impl,
summarize,
1 change: 1 addition & 0 deletions docs/source/trainer.rst
@@ -6,6 +6,7 @@ Trainer

.. automodule:: pytorch_lightning.trainer
:members: fit, test
:noindex:
:exclude-members:
run_pretrain_routine,
_abc_impl,
4 changes: 2 additions & 2 deletions pl_examples/__init__.py
@@ -3,8 +3,8 @@
-------------------------

In 99% of cases you want to just copy `one of the examples
<https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples>`_
to start a new lightningModule and change the core of what your model is actually trying to do.
<https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples>`_
to start a new lightningModule and change the core of what your model is actually trying to do.

.. code-block:: bash

5 changes: 3 additions & 2 deletions pl_examples/basic_examples/lightning_module_template.py
@@ -15,10 +15,11 @@
from torch.utils.data.distributed import DistributedSampler
from torchvision.datasets import MNIST

import pytorch_lightning as pl
from pytorch_lightning.core import LightningModule
from pytorch_lightning.core import data_loader


class LightningTemplateModel(pl.LightningModule):
class LightningTemplateModel(LightningModule):
"""
Sample model to show how to define a template
"""
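As an aside, the two import styles touched in this hunk refer to the same class; a tiny sketch of that equivalence (the assertion rests on an assumption about the package layout, not on anything shown in this diff):

.. code-block:: python

    # The template now imports the class explicitly instead of via the pl namespace.
    import pytorch_lightning as pl
    from pytorch_lightning.core import LightningModule

    # Both names are assumed to resolve to the same class object.
    assert pl.LightningModule is LightningModule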
10 changes: 6 additions & 4 deletions pl_examples/domain_templates/gan.py
@@ -19,7 +19,9 @@
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST

import pytorch_lightning as pl
from pytorch_lightning.core import LightningModule
from pytorch_lightning.core import data_loader
from pytorch_lightning.trainer import Trainer


class Generator(nn.Module):
@@ -69,7 +71,7 @@ def forward(self, img):
return validity


class GAN(pl.LightningModule):
class GAN(LightningModule):

def __init__(self, hparams):
super(GAN, self).__init__()
@@ -165,7 +167,7 @@ def configure_optimizers(self):
opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=lr, betas=(b1, b2))
return [opt_g, opt_d], []

@pl.data_loader
@data_loader
def train_dataloader(self):
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize([0.5], [0.5])])
@@ -193,7 +195,7 @@ def main(hparams):
# ------------------------
# 2 INIT TRAINER
# ------------------------
trainer = pl.Trainer()
trainer = Trainer()

# ------------------------
# 3 START TRAINING
Empty file.
Empty file.
12 changes: 8 additions & 4 deletions pl_examples/full_examples/imagenet/imagenet_example.py
@@ -19,6 +19,8 @@
import torchvision.transforms as transforms

import pytorch_lightning as pl
from pytorch_lightning.core import LightningModule
from pytorch_lightning.core import data_loader

# pull out resnet names from torchvision models
MODEL_NAMES = sorted(
@@ -27,9 +29,11 @@
)


class ImageNetLightningModel(pl.LightningModule):

class ImageNetLightningModel(LightningModule):
def __init__(self, hparams):
"""
TODO: add docstring here
"""
super(ImageNetLightningModel, self).__init__()
self.hparams = hparams
self.model = models.__dict__[self.hparams.arch](pretrained=self.hparams.pretrained)
@@ -128,7 +132,7 @@ def configure_optimizers(self):
scheduler = lr_scheduler.ExponentialLR(optimizer, gamma=0.1)
return [optimizer], [scheduler]

@pl.data_loader
@data_loader
def train_dataloader(self):
normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
@@ -159,7 +163,7 @@ def train_dataloader(self):
)
return train_loader

@pl.data_loader
@data_loader
def val_dataloader(self):
normalize = transforms.Normalize(
mean=[0.485, 0.456, 0.406],
29 changes: 15 additions & 14 deletions pytorch_lightning/core/lightning.py
@@ -365,7 +365,6 @@ def validation_step(self, batch, batch_idx, dataset_idx):

def test_step(self, *args, **kwargs):
"""return whatever outputs will need to be aggregated in test_end

:param batch: The output of your dataloader. A tensor, tuple or list
:param int batch_idx: Integer displaying which batch this is
:param int dataloader_idx: Integer displaying which dataloader this is (only if multiple test datasets used)
@@ -381,11 +380,13 @@ def test_step(self, batch, batch_idx, dataloader_idxdx)


**OPTIONAL**
If you don't need to test you don't need to implement this method. In this step you'd normally
generate examples or calculate anything of interest such as accuracy.
If you don't need to test you don't need to implement this method.
In this step you'd normally generate examples or
calculate anything of interest such as accuracy.

When the validation_step is called, the model has been put in eval mode and PyTorch gradients
have been disabled. At the end of validation, model goes back to training mode and gradients are enabled.
When the validation_step is called, the model has been put in eval mode
and PyTorch gradients have been disabled.
At the end of validation, model goes back to training mode and gradients are enabled.

The dict you return here will be available in the `test_end` method.

@@ -578,7 +579,7 @@ def configure_ddp(self, model, device_ids):
3. On a testing batch, the call goes to model.test_step

Args:
model (LightningModule): the LightningModule currently being optimized
model (:class:`.LightningModule`): the LightningModule currently being optimized
device_ids (list): the list of GPU ids

Return:
@@ -692,7 +693,7 @@ def configure_apex(self, amp, model, optimizers, amp_level):

Args:
amp (object): pointer to amp library object
model (LightningModule): pointer to current lightningModule
model (:class:`.LightningModule`): pointer to current lightningModule
optimizers (list): list of optimizers passed in configure_optimizers()
amp_level (str): AMP mode chosen ('O1', 'O2', etc...)

@@ -1087,7 +1088,6 @@ def val_dataloader(self):
@classmethod
def load_from_metrics(cls, weights_path, tags_csv, map_location=None):
r"""

You should use `load_from_checkpoint` instead!
However, if your .ckpt weights don't have the hyperparameters saved, use this method to pass
in a .csv with the hparams you'd like to use. These will be converted into a argparse.Namespace
@@ -1097,10 +1097,11 @@ def load_from_metrics(cls, weights_path, tags_csv, map_location=None):

weights_path (str): Path to a PyTorch checkpoint
tags_csv (str): Path to a .csv with two columns (key, value) as in this
Example::
key,value
drop_prob,0.2
batch_size,32

Example::
key,value
drop_prob,0.2
batch_size,32

map_location (dict | str | torch.device | function):
If your checkpoint saved a GPU model and you now load on CPUs
@@ -1163,7 +1164,7 @@ def load_from_checkpoint(cls, checkpoint_path, map_location=None):

model = MyModel(hparams)

class MyModel(pl.LightningModule):
class MyModel(LightningModule):
def __init__(self, hparams):
self.learning_rate = hparams.learning_rate

@@ -1172,7 +1173,7 @@ def __init__(self, hparams):
# when using a dict
model = MyModel({'learning_rate': 0.1})

class MyModel(pl.LightningModule):
class MyModel(LightningModule):
def __init__(self, hparams):
self.learning_rate = hparams['learning_rate']

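A hedged usage sketch for the load_from_metrics docstring reworked above; MyModel and the paths are placeholders, and the string map_location follows the options the docstring lists:

.. code-block:: python

    # Placeholder model class and paths; signature taken from the hunk above.
    model = MyModel.load_from_metrics(
        weights_path='/path/to/checkpoint.ckpt',
        tags_csv='/path/to/meta_tags.csv',
        map_location='cpu',
    )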
4 changes: 3 additions & 1 deletion pytorch_lightning/core/memory.py
@@ -277,15 +277,17 @@ def get_human_readable_count(number):
"""
Abbreviates an integer number with K, M, B, T for thousands, millions,
billions and trillions, respectively.

Examples:
123 -> 123
1234 -> 1 K (one thousand)
2e6 -> 2 M (two million)
3e9 -> 3 B (three billion)
4e12 -> 4 T (four trillion)
5e15 -> 5,000 T

:param number: a positive integer number
:returns a string formatted according to the pattern described above.
:return: a string formatted according to the pattern described above.
"""
assert number >= 0
labels = [' ', 'K', 'M', 'B', 'T']
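Restating the docstring's mapping above as a usage sketch; the expected values are the ones the docstring lists, while the exact spacing of the returned strings is not shown in this diff:

.. code-block:: python

    # Inputs and expected abbreviations taken from the docstring examples above.
    get_human_readable_count(123)                # "123"
    get_human_readable_count(1234)               # "1 K"
    get_human_readable_count(2000000)            # "2 M"
    get_human_readable_count(3000000000)         # "3 B"
    get_human_readable_count(5000000000000000)   # "5,000 T"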
2 changes: 1 addition & 1 deletion pytorch_lightning/loggers/mlflow.py
@@ -1,5 +1,5 @@
"""
Log using `mlflow <https://mlflow.org>'_
Log using `mlflow <https://mlflow.org>`_

.. code-block:: python

6 changes: 4 additions & 2 deletions pytorch_lightning/loggers/neptune.py
@@ -103,8 +103,10 @@ def any_lightning_module_function_or_hook(...):
Must be list of str or single str. Uploaded sources are displayed in the experiment’s Source code tab.
If None is passed, Python file from which experiment was created will be uploaded.
Pass empty list ([]) to upload no files. Unix style pathname pattern expansion is supported.
For example, you can pass '*.py' to upload all python source files from the current directory.
For recursion lookup use '**/*.py' (for Python 3.5 and later). For more information see glob library.
For example, you can pass '\*.py'
to upload all python source files from the current directory.
For recursion lookup use '\**/\*.py' (for Python 3.5 and later).
For more information see glob library.
params (dict|None): Optional. Parameters of the experiment. After experiment creation params are read-only.
Parameters are displayed in the experiment’s Parameters section and each key-value pair can be
viewed in experiments view as a column.
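To illustrate the glob patterns referenced in the hunk above, a small standard-library example (file names are whatever happens to be in the working directory):

.. code-block:: python

    import glob

    # '*.py' matches python sources in the current directory only
    top_level = glob.glob('*.py')

    # '**/*.py' with recursive=True walks subdirectories as well (Python 3.5+)
    recursive = glob.glob('**/*.py', recursive=True)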
12 changes: 6 additions & 6 deletions pytorch_lightning/trainer/distrib_parts.py
@@ -269,14 +269,14 @@
-------------------------

Instead of manually building SLURM scripts, you can use the
`SlurmCluster object <https://williamfalcon.github.io/test-tube/hpc/SlurmCluster>`_
to do this for you. The SlurmCluster can also run a grid search if you pass
in a `HyperOptArgumentParser
<https://williamfalcon.github.io/test-tube/hyperparameter_optimization/HyperOptArgumentParser>`_.
`SlurmCluster object <https://williamfalcon.github.io/test-tube/hpc/SlurmCluster>`_
to do this for you. The SlurmCluster can also run a grid search if you pass
in a `HyperOptArgumentParser
<https://williamfalcon.github.io/test-tube/hyperparameter_optimization/HyperOptArgumentParser>`_.

Here is an example where you run a grid search of 9 combinations of hyperparams.
The full examples are `here
<https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples/new_project_templates/multi_node_examples>`_.
The full examples are
`here <https://git.io/Jv87p>`_.

.. code-block:: python

7 changes: 4 additions & 3 deletions pytorch_lightning/trainer/trainer.py
@@ -153,8 +153,9 @@ def __init__(

trainer = Trainer(checkpoint_callback=checkpoint_callback)

early_stop_callback: Callback for early stopping. If
set to ``True``, then the default callback monitoring ``'val_loss'`` is created.
early_stop_callback (:class:`pytorch_lightning.callbacks.EarlyStopping`):
Callback for early stopping.
If set to ``True``, then the default callback monitoring ``'val_loss'`` is created.
Will raise an error if ``'val_loss'`` is not found.
If set to ``False``, then early stopping will be disabled.
If set to ``None``, then the default callback monitoring ``'val_loss'`` is created.
@@ -1168,7 +1169,7 @@ def test(self, model: Optional[LightningModule] = None):
Separates from fit to make sure you never run on your test set until you want to.

Args:
model: The model to test.
model (:class:`.LightningModule`): The model to test.

Example::

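For context, a sketch of the early_stop_callback usage that the docstring above now cross-references; the EarlyStopping constructor arguments are assumptions, not taken from this diff:

.. code-block:: python

    from pytorch_lightning import Trainer
    from pytorch_lightning.callbacks import EarlyStopping

    # monitor/patience/mode are illustrative arguments
    early_stop = EarlyStopping(monitor='val_loss', patience=3, mode='min')
    trainer = Trainer(early_stop_callback=early_stop)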
2 changes: 1 addition & 1 deletion pytorch_lightning/trainer/training_loop.py
@@ -114,7 +114,7 @@

When using PackedSequence, do 2 things:
1. return either a padded tensor in dataset or a list of variable length tensors
in the dataloader collate_fn (example above shows the list implementation).
in the dataloader collate_fn (example above shows the list implementation).
2. Pack the sequence in forward or training and validation steps depending on use case.

.. code-block:: python
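A rough sketch of option 1 from the PackedSequence note above, returning a list of variable-length tensors from a dataloader collate_fn; the (x, y) sample structure is an assumption:

.. code-block:: python

    import torch

    def collate_fn(batch):
        # keep variable-length inputs in a plain list instead of stacking them,
        # so they can be packed later in forward / training_step
        xs = [x for x, _ in batch]
        ys = torch.stack([y for _, y in batch])
        return xs, ys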