Port "bert multi lingual tpu training (8 cores)" to Ignite #960
🚀 Feature

The recently added example of TPU usage with Ignite covers training on a single TPU. The idea is to port this Kaggle kernel, https://www.kaggle.com/abhishek/bert-multi-lingual-tpu-training-8-cores, to Ignite and include it in Ignite's showcase.

Based on #952 (comment)

Comments
I'd like to work on it. Should we use the same dataset and the same BERT model? I think the purpose of this is to make users comfortable using Ignite with multi-core TPUs. Maybe something like this, with the added features of Ignite? PyTorch on Cloud TPUs: MultiCore Training AlexNet on Fashion MNIST.
@ahmedo42 thanks for asking. I agree that the purpose is more about Ignite and multiple TPUs, which is more or less covered here: https://github.com/pytorch/ignite/tree/master/examples/contrib/cifar10#colab-on-8-tpus. An NLP example on multiple TPUs, possibly trained on Kaggle's TPUv3 (vs TPUv2 on Colab), could still be nice to have in addition to the CIFAR10 example. What do you think?
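(For reference, a minimal sketch of the pattern under discussion: an Ignite `Engine` replicated across 8 TPU cores via `torch_xla`'s multiprocessing spawn. The tiny linear model, `train_step`, and random data below are illustrative placeholders, not code from the kernel or the cifar10 example.)

```python
# Sketch only: replicate an Ignite trainer across 8 TPU cores with torch_xla.
# Model, optimizer, and data are toy placeholders.
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp
from ignite.engine import Engine


def run(rank, flags):
    device = xm.xla_device()  # the TPU core assigned to this process
    model = nn.Linear(128, 2).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def train_step(engine, batch):
        model.train()
        x, y = (t.to(device) for t in batch)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        # all-reduces gradients across the 8 cores, then steps
        xm.optimizer_step(optimizer)
        return loss.item()

    trainer = Engine(train_step)

    # toy data; a real port would use a DistributedSampler + ParallelLoader
    data = [(torch.randn(16, 128), torch.randint(0, 2, (16,))) for _ in range(8)]
    trainer.run(data, max_epochs=2)


if __name__ == "__main__":
    xmp.spawn(run, args=({},), nprocs=8, start_method="fork")
```

Later Ignite versions (0.4+) also wrap this pattern in `ignite.distributed` with an `xla-tpu` backend, which removes most of this boilerplate.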
Didn't really know the difference between TPUs on Kaggle and on Colab 😃.
Totally agree, an NLP example is needed, so it should be a ported notebook.
Well, I'm hesitating between two options:
Do you have any NLP background to suggest which would be more interesting to have here?
Well, notebooks could be extended too; people fork notebooks and extend them all the time on Kaggle.
This seems like a best practice: almost all of the Hugging Face examples are scripts, which allow a higher degree of control from the user's perspective, and we should probably do the same.
Well, I think we really need a Transformer example, since it's a huge trend and the de facto standard in NLP right now, so porting an example from Hugging Face would be a good idea.
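(Again purely for illustration: a Transformer port along these lines would presumably start from the transformers library's multilingual BERT, roughly as below. The checkpoint name follows the kernel's title; the sample sentences, labels, and settings are made up.)

```python
# Sketch only: multilingual BERT with a classification head via transformers.
# Inputs and labels are placeholders; a recent transformers version is assumed.
import torch
from transformers import BertForSequenceClassification, BertTokenizer

name = "bert-base-multilingual-uncased"  # assumed; use whichever checkpoint the kernel uses
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tokenizer(
    ["ceci est un exemple", "this is an example"],
    padding=True,
    truncation=True,
    max_length=64,
    return_tensors="pt",
)
labels = torch.tensor([0, 1])

outputs = model(**batch, labels=labels)  # loss + logits in one forward pass
outputs.loss.backward()
```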
Sounds good!