Commit 41b6cbb

karlinjf and jkarlin authored

Don't copy the batch when training on a single gpu (#1576)

* fix
* whitespace

Co-authored-by: Josh Karlin <[email protected]>

1 parent 0b22b64 commit 41b6cbb

1 file changed: +5 −1 lines changed

pytorch_lightning/trainer/training_loop.py (+5 −1)
@@ -754,7 +754,11 @@ def training_forward(self, batch, batch_idx, opt_idx, hiddens):
             gpu_id = 0
             if isinstance(self.data_parallel_device_ids, list):
                 gpu_id = self.data_parallel_device_ids[0]
-            batch = self.transfer_batch_to_gpu(copy.copy(batch), gpu_id)
+
+            # Don't copy the batch since there is a single gpu that the batch could
+            # be referenced from and if there are multiple optimizers the batch will
+            # wind up copying it to the same device repeatedly.
+            batch = self.transfer_batch_to_gpu(batch, gpu_id)
             args[0] = batch
             output = self.model.training_step(*args)
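For context on why dropping copy.copy matters: with multiple optimizers, training_forward runs once per optimizer for the same batch object. The commit comment implies that, without the shallow copy, the batch the trainer holds ends up referencing the already-moved tensors after the first pass, so later passes do not repeat the host-to-device transfer. Below is a minimal sketch of that effect, assuming a hypothetical move_batch_to_device helper that moves list elements in place; it only loosely mirrors transfer_batch_to_gpu and is an illustration, not the library's implementation.

    import torch

    def move_batch_to_device(batch, device):
        # Hypothetical stand-in for Lightning's transfer helper: replace each
        # tensor in a list batch with its on-device counterpart, in place.
        # torch.Tensor.to() returns the tensor unchanged when it already lives
        # on the requested device, so a repeated pass is effectively free.
        for i, item in enumerate(batch):
            if isinstance(item, torch.Tensor):
                batch[i] = item.to(device)
        return batch

    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    batch = [torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))]

    # Two optimizers mean the forward pass runs twice on the same batch object.
    # With copy.copy(batch), each pass would hand the helper a fresh shallow copy
    # whose elements are still the original CPU tensors, re-uploading them every
    # time; without the copy, the first pass rebinds the elements to device
    # tensors and the second pass is effectively a no-op.
    for opt_idx in range(2):
        batch = move_batch_to_device(batch, device)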
