[FT] Support local_sgd / diloco in titan #1122
Conversation
It looks like we can share the training loop? We basically just add another context manager. Additionally, `training_method` is added to the main `JobConfig`, so we shouldn't create an additional `train.py`. If we don't want to expose this as a main feature yet, then we have to move the `training_method` configuration to `experiments/fault_tolerance/train.py` as well. You can check `torchtitan/tests/unit_tests/test_job_config.py` for an example.
            model=self.model_parts[0],
            optimizer=self.optimizers,
            sync_every=2,
        ) as semi_sync_training:
It seems that we are not using this variable. We don't need `as semi_sync_training`?
Yeah, we don't need it. I can remove it.
@@ -502,6 +502,13 @@ class FaultTolerance:
    min_replica_size: int = 1
    """The minimum number of FT replica for each step."""

    training_method: str | None = "diloco"
`semi_sync_method` or `synchronize_method` would be less confusing; `training_method` is pretty general. And can we specify that if the value is not set, we will use synchronous training?
`semi_sync_method` sounds good. I will also update that comment.
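For illustration, a minimal sketch of what the renamed field could look like, with the unset case documented as plain synchronous training (the docstring wording here is an assumption, not the PR's actual text):

```python
from dataclasses import dataclass


@dataclass
class FaultTolerance:
    min_replica_size: int = 1
    """The minimum number of FT replica for each step."""

    semi_sync_method: str | None = None
    """
    Which semi-sync training method to use: "local_sgd" or "diloco".
    If None, regular synchronous training is used.
    """
```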
Correct @fegin, we can share the training loop. My main worry was making the regular training loop harder to read by adding this, but yeah, you are right that the configs are already in JobConfig. The alternative is to ask the user to use
I think if we can do it with the same train loop it'd be nice -- maybe we can use ExitStack for cleaner optional registration of the context managers?
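A runnable sketch of the ExitStack idea; the `semi_sync_training` context manager and the config values below are placeholders for illustration, not the PR's actual code:

```python
import contextlib
from contextlib import ExitStack


@contextlib.contextmanager
def semi_sync_training(method: str, model, optimizer, sync_every: int):
    # Placeholder standing in for torchft's LocalSGD / DiLoCo context managers.
    print(f"entering semi-sync training: {method}")
    try:
        yield
    finally:
        print("leaving semi-sync training")


def train_loop(semi_sync_method: str | None, num_steps: int = 3):
    model, optimizer = object(), object()  # placeholders for the real model/optimizer
    with ExitStack() as stack:
        # Optional registration: the context manager is only entered when a
        # semi-sync method is configured, so the regular loop stays unchanged.
        if semi_sync_method is not None:
            stack.enter_context(
                semi_sync_training(semi_sync_method, model, optimizer, sync_every=2)
            )
        for step in range(num_steps):
            print(f"train step {step}")


train_loop(None)       # plain synchronous training
train_loop("diloco")   # semi-sync training via the optional context manager
```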
I'm okay either way. Since TorchFT is already in core TorchTitan, I think it is okay to put the semi-sync logic into the main training loop. If you want to put it in the experiment folder,
Depends on torchft changes:
This PR adds a new `FtTrainer` to the torchft experiments folder, as well as a context manager that wraps the train loop to run local SGD or DiLoCo. It also adds a config property to set the training method.
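A rough sketch of what such a wrapping context manager could look like; `maybe_semi_sync_training` is a hypothetical name, and the torchft constructor arguments are assumptions based on the diff above (DiLoCo in particular takes additional arguments, e.g. an outer optimizer), not the verified API:

```python
import contextlib
from typing import Optional


@contextlib.contextmanager
def maybe_semi_sync_training(
    method: Optional[str], *, ft_manager, model, optimizer, sync_every: int
):
    """Wrap the train loop in torchft's LocalSGD / DiLoCo when configured."""
    if method is None:
        # No semi-sync method configured: run plain synchronous training.
        yield
        return

    from torchft import local_sgd  # lazy import so torchft stays optional

    if method == "local_sgd":
        ctx = local_sgd.LocalSGD(ft_manager, model, optimizer, sync_every=sync_every)
    elif method == "diloco":
        # Argument list simplified for this sketch; see torchft for the real signature.
        ctx = local_sgd.DiLoCo(ft_manager, model, optimizer, sync_every=sync_every)
    else:
        raise ValueError(f"Unknown semi-sync method: {method!r}")

    with ctx:
        yield
```

In the trainer this would be entered once around the stepping loop, roughly as in the diff snippet above (passing the model parts, optimizers, and a `sync_every` value).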
To run (need 3 different terminals):
Start torchft lighthouse (terminal 1):
Start replica 1 (terminal 2, update lighthouse URL):
Start replica 2 (terminal 3, update lighthouse URL):