
Commit c9a1c81

Merge branch 'main' into master
2 parents e1b1f4c + 6f54c88 commit c9a1c81

File tree: 2 files changed, +51 -8 lines

advanced_source/cpp_custom_ops.rst (+49 -6)
@@ -63,9 +63,47 @@ Using ``cpp_extension`` is as simple as writing the following ``setup.py``:
 
 If you need to compile CUDA code (for example, ``.cu`` files), then instead use
 `torch.utils.cpp_extension.CUDAExtension <https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension>`_.
-Please see how
-`extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an example for
-how this is set up.
+Please see `extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an
+example for how this is set up.
+
+Starting with PyTorch 2.6, you can now build a single wheel for multiple CPython
+versions (similar to what you would do for pure Python packages). In particular,
+if your custom library adheres to the `CPython Stable Limited API
+<https://docs.python.org/3/c-api/stable.html>`_ or avoids CPython entirely, you
+can build one Python-agnostic wheel against a minimum supported CPython version
+through setuptools' ``py_limited_api`` flag, like so:
+
+.. code-block:: python
+
+   from setuptools import setup, Extension
+   from torch.utils import cpp_extension
+
+   setup(name="extension_cpp",
+         ext_modules=[
+             cpp_extension.CppExtension(
+                 "extension_cpp",
+                 ["python_agnostic_code.cpp"],
+                 py_limited_api=True)],
+         cmdclass={'build_ext': cpp_extension.BuildExtension},
+         options={"bdist_wheel": {"py_limited_api": "cp39"}}
+   )
+
+Note that you must specify ``py_limited_api=True`` both within ``setup``
+and also as an option to the ``"bdist_wheel"`` command with the minimal supported
+Python version (in this case, 3.9). This ``setup`` would build one wheel that could
+be installed across multiple Python versions ``python>=3.9``. Please see
+`torchao <https://github.com/pytorch/ao>`_ for an example.
+
+.. note::
+
+   You must verify independently that the built wheel is truly Python agnostic.
+   Specifying ``py_limited_api`` does not check for any guarantees, so it is possible
+   to build a wheel that looks Python agnostic but will crash, or worse, be silently
+   incorrect, in another Python environment. Take care to avoid using unstable CPython
+   APIs, for example APIs from libtorch_python (in particular the pytorch/python bindings),
+   and to only use APIs from libtorch (aten objects, operators, and the dispatcher).
+   For example, to give access to custom ops from Python, the library should register
+   the ops through the dispatcher (covered below!).
 
 Defining the custom op and adding backend implementations
 ---------------------------------------------------------
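
For the CUDA case mentioned at the top of this hunk, a minimal ``setup.py`` sketch might look as follows. The source file names ``muladd.cpp`` and ``muladd_kernel.cu`` are hypothetical stand-ins; the real layout is the one used in extension-cpp:

.. code-block:: python

   # Sketch of a setup.py that also compiles CUDA sources (hypothetical file names).
   from setuptools import setup
   from torch.utils import cpp_extension

   setup(
       name="extension_cpp",
       ext_modules=[
           cpp_extension.CUDAExtension(
               "extension_cpp",
               ["muladd.cpp", "muladd_kernel.cu"],  # .cu files are compiled with nvcc
           )
       ],
       cmdclass={"build_ext": cpp_extension.BuildExtension},
   )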
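
To sanity-check the ``py_limited_api`` build described above, one option (an assumption on our part, not something the tutorial prescribes) is to inspect the built wheel's tags and confirm it advertises ``abi3`` rather than a single-interpreter ABI. The ``dist/`` path and version below are hypothetical:

.. code-block:: python

   # Sketch: verify the built wheel filename carries the abi3 tag.
   from pathlib import Path

   from packaging.utils import parse_wheel_filename  # pip install packaging

   wheel = Path("dist/extension_cpp-0.0.1-cp39-abi3-linux_x86_64.whl")  # hypothetical
   name, version, _build, tags = parse_wheel_filename(wheel.name)
   assert any(tag.abi == "abi3" for tag in tags), "not a limited-API wheel"
   print(f"{name} {version} is tagged {sorted(str(t) for t in tags)}")

As the note in the hunk says, this only checks the tag; it cannot prove the binary avoids unstable CPython APIs.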
@@ -177,7 +215,7 @@ operator specifies how to compute the metadata of output tensors given the metadata
 The FakeTensor kernel should return dummy Tensors of your choice with
 the correct Tensor metadata (shape/strides/``dtype``/device).
 
-We recommend that this be done from Python via the `torch.library.register_fake` API,
+We recommend that this be done from Python via the ``torch.library.register_fake`` API,
 though it is possible to do this from C++ as well (see
 `The Custom Operators Manual <https://pytorch.org/docs/main/notes/custom_operators.html>`_
 for more details).
@@ -188,7 +226,9 @@ for more details).
 # before calling ``torch.library`` APIs that add registrations for the
 # C++ custom operator(s). The following import loads our
 # C++ custom operator definitions.
-# See the next section for more details.
+# Note that if you are striving for Python agnosticism, you should use
+# the ``load_library(...)`` API call instead. See the next section for
+# more details.
 from . import _C
 
 @torch.library.register_fake("extension_cpp::mymuladd")
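
For reference, the ``register_fake`` registration that the trailing context line above belongs to looks roughly like this in the extension-cpp example; treat the body as a sketch of the pattern (metadata-only computation, no real values) rather than a verbatim copy:

.. code-block:: python

   import torch

   @torch.library.register_fake("extension_cpp::mymuladd")
   def _(a, b, c):
       # Only validate inputs and produce output metadata; no real compute.
       torch._check(a.shape == b.shape)
       torch._check(a.dtype == torch.float)
       torch._check(b.dtype == torch.float)
       torch._check(a.device == b.device)
       return torch.empty_like(a)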
@@ -214,7 +254,10 @@ of two ways:
 1. If you're following this tutorial, importing the Python C extension module
    we created will load the C++ custom operator definitions.
 2. If your C++ custom operator is located in a shared library object, you can
-   also use ``torch.ops.load_library("/path/to/library.so")`` to load it.
+   also use ``torch.ops.load_library("/path/to/library.so")`` to load it. This
+   is the blessed path for Python agnosticism, as you will not have a Python C
+   extension module to import. See `torchao __init__.py <https://github.com/pytorch/ao/blob/881e84b4398eddcea6fee4d911fc329a38b5cd69/torchao/__init__.py#L26-L28>`_
+   for an example.
 
 
 Adding training (autograd) support for an operator
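
As a usage sketch of option 2 above (the ``.so`` path is the illustrative one from the text; ``mymuladd`` is the op defined earlier in the tutorial):

.. code-block:: python

   import torch

   # Loading the shared library registers its operators with the dispatcher;
   # no Python C extension module is imported.
   torch.ops.load_library("/path/to/library.so")

   a, b = torch.randn(3), torch.randn(3)
   out = torch.ops.extension_cpp.mymuladd(a, b, 2.0)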

index.rst (+2 -2)
@@ -432,14 +432,14 @@ Welcome to PyTorch Tutorials
 
 .. customcarditem::
    :header: Custom C++ and CUDA Extensions
-   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
+   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
    :image: _static/img/thumbnails/cropped/Custom-Cpp-and-CUDA-Extensions.png
    :link: advanced/cpp_extension.html
    :tags: Extending-PyTorch,Frontend-APIs,C++,CUDA
 
 .. customcarditem::
    :header: Extending TorchScript with Custom C++ Operators
-   :card_description: Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
+   :card_description: Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
    :image: _static/img/thumbnails/cropped/Extending-TorchScript-with-Custom-Cpp-Operators.png
    :link: advanced/torch_script_custom_ops.html
    :tags: Extending-PyTorch,Frontend-APIs,TorchScript,C++
