'Trainer' object has no attribute 'proc_rank' #2267
Comments
Hi! Thanks for your contribution! Great first issue!
EDIT: it is also recommended to use `self.trainer.global_rank` instead.
but also, why do you need to check the proc rank?
Actually I borrowed most of the code from HuggingFace's T5 fine-tuning script, but I guess if it's not needed then I will remove it.
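For reference, the guard under discussion looks roughly like this in the borrowed script (a sketch reconstructed from the description above, not the verbatim source); `proc_rank` is the Trainer attribute the traceback complains about:

```python
def is_logger(self):
    # log only from the main process; `proc_rank` existed on the Trainer
    # in 0.7.x but raises AttributeError on 0.8.x
    return self.trainer.proc_rank <= 0
```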
@vishal-burman do you have ddp working with transformers and pytorch-lightning==0.8.1?
@sshleifer . I am also struggling with making ddp work. According to PyTorch documentation, there is a warning not to change model parameters after ddp construction. I wonder if that could be the case.
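For anyone following along, the PyTorch rule referenced here is that DistributedDataParallel registers gradient hooks on the parameters it sees at construction time, so those parameters must not be replaced afterwards. A minimal single-process sketch of the rule (the process-group settings are only there to make it self-contained):

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# single-process "gloo" group, just to make the sketch runnable on CPU
dist.init_process_group("gloo", init_method="tcp://127.0.0.1:29500",
                        rank=0, world_size=1)

model = nn.Linear(10, 2)
ddp_model = DDP(model)  # DDP registers grad hooks on the current parameters

# Safe: updating parameters in place through an optimizer step.
# Unsafe after construction: replacing parameter tensors, e.g.
#   ddp_model.module.weight = nn.Parameter(torch.zeros(2, 10))
# DDP would keep syncing the tensors it captured when it was built.

dist.destroy_process_group()
```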
can you guys post a minimal example that is breaking? in lightning we don’t change model stuff once ddp starts. maybe transformers is doing that? but either way, the best thing is for us to have a model or test to test against.
QUICK FIX: if you are training your model on a single GPU, set:

```python
def is_logger(self):
    return True
```
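For multi-GPU training, a rename-based fix should also work: if I read the 0.8.0 changelog right, `Trainer.proc_rank` was renamed to `Trainer.global_rank`, so the guard can keep its original logic:

```python
def is_logger(self):
    # `proc_rank` was renamed to `global_rank` in pytorch-lightning 0.8.x
    return self.trainer.global_rank <= 0
```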
🐛 Bug
The 1st epoch runs to completion, and then the above error is thrown in the is_logger() method.
To Reproduce
Code sample
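The original code sample did not survive this copy. A minimal repro in its spirit, assuming pytorch-lightning 0.8.1 and the `is_logger` pattern from the T5 script (the `BoringModel` name, layer sizes, and data are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def is_logger(self):
        # on 0.8.x this raises: 'Trainer' object has no attribute 'proc_rank'
        return self.trainer.proc_rank <= 0

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": torch.nn.functional.cross_entropy(self(x), y)}

    def on_epoch_end(self):
        # mirrors the T5 script's rank-guarded logging
        if self.is_logger():
            print("epoch finished on the main process")

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


train = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))),
    batch_size=8,
)
pl.Trainer(max_epochs=1).fit(BoringModel(), train)
```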
Expected behavior
Environment
Please copy and paste the output from our environment collection script (or fill out the checklist below manually). You can get the script and run it with:
- How you installed PyTorch (conda, pip, source): pip

Additional context
This code does run in version 0.7.6, but it breaks in the latest release.