
Commit 07b2197

Merge branch 'main' into patch-1
2 parents: 71d233c + d8a9749

File tree

3 files changed (+54, -11 lines)


advanced_source/cpp_custom_ops.rst

+49 -6
@@ -63,9 +63,47 @@ Using ``cpp_extension`` is as simple as writing the following ``setup.py``:
 
 If you need to compile CUDA code (for example, ``.cu`` files), then instead use
 `torch.utils.cpp_extension.CUDAExtension <https://pytorch.org/docs/stable/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension>`_.
-Please see how
-`extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an example for
-how this is set up.
+Please see `extension-cpp <https://github.com/pytorch/extension-cpp>`_ for an
+example for how this is set up.
+
+Starting with PyTorch 2.6, you can now build a single wheel for multiple CPython
+versions (similar to what you would do for pure Python packages). In particular,
+if your custom library adheres to the `CPython Stable Limited API
+<https://docs.python.org/3/c-api/stable.html>`_ or avoids CPython entirely, you
+can build one Python-agnostic wheel against a minimum supported CPython version
+through setuptools' ``py_limited_api`` flag, like so:
+
+.. code-block:: python
+
+   from setuptools import setup, Extension
+   from torch.utils import cpp_extension
+
+   setup(name="extension_cpp",
+         ext_modules=[
+             cpp_extension.CppExtension(
+                 "extension_cpp",
+                 ["python_agnostic_code.cpp"],
+                 py_limited_api=True)],
+         cmdclass={'build_ext': cpp_extension.BuildExtension},
+         options={"bdist_wheel": {"py_limited_api": "cp39"}}
+   )
+
+Note that you must specify ``py_limited_api=True`` both within ``setup``
+and also as an option to the ``"bdist_wheel"`` command with the minimal supported
+Python version (in this case, 3.9). This ``setup`` would build one wheel that could
+be installed across multiple Python versions ``python>=3.9``. Please see
+`torchao <https://github.com/pytorch/ao>`_ for an example.
+
+.. note::
+
+   You must verify independently that the built wheel is truly Python-agnostic.
+   Specifying ``py_limited_api`` does not check for any guarantees, so it is possible
+   to build a wheel that looks Python-agnostic but will crash, or worse, be silently
+   incorrect, in another Python environment. Take care to avoid using unstable CPython
+   APIs, for example APIs from libtorch_python (in particular, PyTorch/Python bindings),
+   and to only use APIs from libtorch (ATen objects, operators, and the dispatcher).
+   For example, to give access to custom ops from Python, the library should register
+   the ops through the dispatcher (covered below!).
 
 Defining the custom op and adding backend implementations
 ---------------------------------------------------------
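A quick way to sanity-check the ``py_limited_api`` build described in the hunk above is the wheel filename itself: a limited-API wheel carries the ``abi3`` ABI tag, and wheel filenames encode their tags per the binary distribution format (PEP 427). Below is a stdlib-only sketch (the parser function and the wheel filename are hypothetical, for illustration only):

```python
def wheel_tags(wheel_name):
    # Wheel filenames follow PEP 427:
    # {distribution}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    stem = wheel_name[: -len(".whl")]
    python_tag, abi_tag, platform_tag = stem.split("-")[-3:]
    return python_tag, abi_tag, platform_tag

# A wheel built with py_limited_api="cp39" should report the "abi3" ABI tag
# and a "cp39" minimum-Python tag (this filename is a made-up example):
print(wheel_tags("extension_cpp-0.0.1-cp39-abi3-linux_x86_64.whl"))
```

If the ABI tag is a concrete interpreter tag like ``cp312`` instead of ``abi3``, the wheel is bound to a single CPython version.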
@@ -177,7 +215,7 @@ operator specifies how to compute the metadata of output tensors given the metad
 The FakeTensor kernel should return dummy Tensors of your choice with
 the correct Tensor metadata (shape/strides/``dtype``/device).
 
-We recommend that this be done from Python via the `torch.library.register_fake` API,
+We recommend that this be done from Python via the ``torch.library.register_fake`` API,
 though it is possible to do this from C++ as well (see
 `The Custom Operators Manual <https://pytorch.org/docs/main/notes/custom_operators.html>`_
 for more details).
@@ -188,7 +226,9 @@ for more details).
 # before calling ``torch.library`` APIs that add registrations for the
 # C++ custom operator(s). The following import loads our
 # C++ custom operator definitions.
-# See the next section for more details.
+# Note that if you are striving for Python agnosticism, you should use
+# the ``load_library(...)`` API call instead. See the next section for
+# more details.
 from . import _C
 
 @torch.library.register_fake("extension_cpp::mymuladd")
@@ -214,7 +254,10 @@ of two ways:
 1. If you're following this tutorial, importing the Python C extension module
    we created will load the C++ custom operator definitions.
 2. If your C++ custom operator is located in a shared library object, you can
-   also use ``torch.ops.load_library("/path/to/library.so")`` to load it.
+   also use ``torch.ops.load_library("/path/to/library.so")`` to load it. This
+   is the blessed path for Python agnosticism, as you will not have a Python C
+   extension module to import. See `torchao __init__.py <https://github.com/pytorch/ao/blob/881e84b4398eddcea6fee4d911fc329a38b5cd69/torchao/__init__.py#L26-L28>`_
+   for an example.
 
 
 Adding training (autograd) support for an operator
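The ``torch.ops.load_library`` path from the hunk above can be sketched as a small package ``__init__.py`` helper. This is a hedged sketch, not torchao's actual code: ``find_extension_libs`` and ``load_extension_libs`` are hypothetical names, and the ``torch`` import is deferred so the file-discovery logic stands on its own:

```python
from pathlib import Path

def find_extension_libs(pkg_dir):
    # Discover compiled extension shared objects shipped next to the
    # package, e.g. an abi3 build named "_C.abi3.so".
    return sorted(Path(pkg_dir).glob("*.so"))

def load_extension_libs(pkg_dir):
    # Register each discovered library's operators with the PyTorch
    # dispatcher, without importing any Python C extension module.
    import torch  # deferred: only needed when actually loading
    for so_file in find_extension_libs(pkg_dir):
        torch.ops.load_library(str(so_file))
```

In a package ``__init__.py`` you would call ``load_extension_libs(Path(__file__).parent)`` once at import time, which makes the operators available under ``torch.ops`` with no C extension import.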

advanced_source/python_custom_ops.py

+1 -1
@@ -3,7 +3,7 @@
 """
 .. _python-custom-ops-tutorial:
 
-Python Custom Operators
+Custom Python Operators
 =======================
 
 .. grid:: 2

index.rst

+4 -4
@@ -397,14 +397,14 @@ Welcome to PyTorch Tutorials
    :tags: Frontend-APIs,C++
 
 .. customcarditem::
-   :header: Python Custom Operators Landing Page
+   :header: PyTorch Custom Operators Landing Page
    :card_description: This is the landing page for all things related to custom operators in PyTorch.
    :image: _static/img/thumbnails/cropped/Custom-Cpp-and-CUDA-Extensions.png
    :link: advanced/custom_ops_landing_page.html
    :tags: Extending-PyTorch,Frontend-APIs,C++,CUDA
 
 .. customcarditem::
-   :header: Python Custom Operators
+   :header: Custom Python Operators
    :card_description: Create Custom Operators in Python. Useful for black-boxing a Python function for use with torch.compile.
    :image: _static/img/thumbnails/cropped/Custom-Cpp-and-CUDA-Extensions.png
    :link: advanced/python_custom_ops.html
@@ -426,14 +426,14 @@ Welcome to PyTorch Tutorials
 
 .. customcarditem::
    :header: Custom C++ and CUDA Extensions
-   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
+   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.
    :image: _static/img/thumbnails/cropped/Custom-Cpp-and-CUDA-Extensions.png
    :link: advanced/cpp_extension.html
    :tags: Extending-PyTorch,Frontend-APIs,C++,CUDA
 
 .. customcarditem::
    :header: Extending TorchScript with Custom C++ Operators
-   :card_description: Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
+   :card_description: Implement a custom TorchScript operator in C++, how to build it into a shared library, how to use it in Python to define TorchScript models and lastly how to load it into a C++ application for inference workloads.
    :image: _static/img/thumbnails/cropped/Extending-TorchScript-with-Custom-Cpp-Operators.png
    :link: advanced/torch_script_custom_ops.html
    :tags: Extending-PyTorch,Frontend-APIs,TorchScript,C++
