Tutorial for AOTI Python runtime #2997

Merged: 26 commits, Aug 23, 2024

Changes from 1 commit

Commits
1dea278
Tutorial for AOTI Python runtime
agunapal Aug 12, 2024
cd09129
Apply suggestions from code review
agunapal Aug 13, 2024
3fa9b20
Addressed review comments and added a section on why AOTI Python
agunapal Aug 13, 2024
7c9edb7
Addressed review comments and added a section on why AOTI Python
agunapal Aug 13, 2024
9cba6fb
fixed spelling
agunapal Aug 13, 2024
a6f6cd9
fixed spelling
agunapal Aug 13, 2024
1375373
Apply suggestions from code review
agunapal Aug 16, 2024
7158985
Addressed review comment
agunapal Aug 16, 2024
53f5965
Changing to use g5.4xlarge machine
agunapal Aug 19, 2024
849c8e3
Merge branch 'main' into tutorial/aoti_python
agunapal Aug 19, 2024
4aa8399
Moved tutorial to recipe
agunapal Aug 19, 2024
39b3942
Merge branch 'tutorial/aoti_python' of https://github.com/agunapal/tu…
agunapal Aug 19, 2024
35c5dc8
addressed review comments
agunapal Aug 19, 2024
71acd96
Moved tutorial to recipe
agunapal Aug 19, 2024
7f5fde9
Change base image to nvidia devel image
agunapal Aug 20, 2024
790f762
Change base image to nvidia devel image
agunapal Aug 20, 2024
45df5d0
Update requirements
agunapal Aug 20, 2024
b268a3c
fixed formatting
agunapal Aug 20, 2024
b6c3a01
Merge branch 'main' into tutorial/aoti_python
agunapal Aug 20, 2024
6578d82
update to CUDA 12.4
agunapal Aug 20, 2024
9ee64d9
Merge branch 'tutorial/aoti_python' of https://github.com/agunapal/tu…
agunapal Aug 20, 2024
67bc080
Apply suggestions from code review
agunapal Aug 21, 2024
fc0ff5e
addressed review comments for formatting
agunapal Aug 21, 2024
85f2870
Update recipes_source/torch_export_aoti_python.py
svekars Aug 22, 2024
cb8ea23
Update recipes_source/torch_export_aoti_python.py
svekars Aug 22, 2024
194388e
Merge branch 'main' into tutorial/aoti_python
svekars Aug 23, 2024
110 changes: 110 additions & 0 deletions intermediate_source/torch_export_aoti_python.py
@@ -0,0 +1,110 @@
# -*- coding: utf-8 -*-

"""
torch.export AOT Inductor Tutorial for Python runtime
======================================================
**Author:** Ankith Gunapal
"""

######################################################################
#
# .. warning::
#
#     ``torch._export.aot_compile`` and ``torch._export.aot_load`` are in Beta status and are subject to
#     backwards-compatibility breaking changes. This tutorial provides an example of how to use these APIs
#     for model deployment using the Python runtime.
#
# It has been shown `previously <https://pytorch.org/docs/stable/torch.compiler_aot_inductor.html#>`__ how AOTInductor can be used
# to perform ahead-of-time compilation of PyTorch exported models by creating
# a shared library that can be run in a non-Python environment.
#
#
# In this tutorial, you will work through an end-to-end example of how to use AOTInductor for the Python runtime.
Contributor


It will make the story more complete by explaining the "why" part here, e.g. eliminating recompilation at run time, max-autotune ahead of time, etc.

Contributor Author


done. Haven't mentioned eliminating recompilation, since the tutorial doesn't show that

# We will look at how to use :func:`torch._export.aot_compile` to generate a shared library.
# We will also look at how to run the shared library in the Python runtime using :func:`torch._export.aot_load`.
#
# **Contents**
#
# .. contents::
# :local:


######################################################################
# Model Compilation
# -----------------
#
# We will use TorchVision's pretrained ``ResNet18`` model in this example and run TorchInductor on the
# exported PyTorch program using :func:`torch._export.aot_compile`.
#
# .. note::
#
#     This API also supports :func:`torch.compile` options like `mode`
Contributor


Suggested change
# This API also supports :func:`torch.compile` options like `mode`
# This API also supports :func:`torch.compile` options like ``mode`` and other.

# As an example, if used on a CUDA-enabled device, we can set ``"max_autotune": True``.
#
# We also specify ``dynamic_shapes`` for the batch dimension. In this example, ``min=2`` is not a bug and is
# explained in `The 0/1 Specialization Problem <https://docs.google.com/document/d/16VPOa3d-Liikf48teAOmxLc92rgvJdfosIy-yoT38Io/edit?fbclid=IwAR3HNwmmexcitV0pbZm_x1a4ykdXZ9th_eJWK-3hBtVgKnrkmemz6Pm5jRQ#heading=h.ez923tomjvyk>`__.


import os
import torch
from torchvision.models import ResNet18_Weights, resnet18

model = resnet18(weights=ResNet18_Weights.DEFAULT)
model.eval()

with torch.inference_mode():

    # Specify the generated shared library path
    aot_compile_options = {
        "aot_inductor.output_path": os.path.join(os.getcwd(), "resnet18_pt2.so"),
    }
    if torch.cuda.is_available():
        device = "cuda"
        aot_compile_options.update({"max_autotune": True})
    else:
        device = "cpu"
        # We need to turn off the below optimizations to support batch_size = 16,
        # which is treated like a special case
        # https://github.com/pytorch/pytorch/pull/116152
        torch.backends.mkldnn.set_flags(False)
        torch.backends.nnpack.set_flags(False)

    model = model.to(device=device)
    example_inputs = (torch.randn(2, 3, 224, 224, device=device),)

    # min=2 is not a bug and is explained in the 0/1 Specialization Problem
    batch_dim = torch.export.Dim("batch", min=2, max=32)
Contributor


I believe it is ok to use min=1 here, but we can't feed in an example input with batch size 1.

Contributor Author

@agunapal Aug 16, 2024


An example with batch_size 1 is usually tried often, hence I set min=2

    so_path = torch._export.aot_compile(
        model,
        example_inputs,
        # Specify the first dimension of the input x as dynamic
        dynamic_shapes={"x": {0: batch_dim}},
        # Specify the generated shared library path
        options=aot_compile_options
    )
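

######################################################################
# A minimal sanity check, assuming the compilation step above succeeded: the
# shared library should now exist at the path we set in ``aot_compile_options``.

# ``so_path`` is returned by ``aot_compile`` and should match the requested output path
print(f"Shared library written to: {so_path}")
assert os.path.exists(so_path)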


######################################################################
# Model Inference in Python
# -------------------------
#
# Typically, the shared object generated above is used in a non-Python environment. In PyTorch 2.3,
# we added a new API :func:`torch._export.aot_load` to load the shared library in the Python runtime.
# The API follows a structure similar to the :func:`torch.jit.load` API. We specify the path
# of the shared library and the device where it should be loaded.
#
# .. note::
#
#     We specify ``batch_size=1`` for inference, and it works even though we specified ``min=2`` in
#     :func:`torch._export.aot_compile`.


import os
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model_so_path = os.path.join(os.getcwd(), "resnet18_pt2.so")

model = torch._export.aot_load(model_so_path, device)
example_inputs = (torch.randn(1, 3, 224, 224, device=device),)

with torch.inference_mode():
    output = model(example_inputs)
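

######################################################################
# Because the library was compiled with a dynamic batch dimension (``min=2``,
# ``max=32``), the loaded model can also serve other batch sizes in that range.
# The batch size of 8 below is an arbitrary choice for illustration:

with torch.inference_mode():
    # Mirror the call convention used above, passing the inputs as a tuple
    larger_batch = (torch.randn(8, 3, 224, 224, device=device),)
    output = model(larger_batch)
    print(output.shape)  # for ResNet18 this should be torch.Size([8, 1000])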