1. If you're following this tutorial, importing the Python C extension module
   we created will load the C++ custom operator definitions.
2. If your C++ custom operator is located in a shared library object, you can
   also use ``torch.ops.load_library("/path/to/library.so")`` to load it. This
   is the blessed path for Python agnosticism, as you will not have a Python C
   extension module to import. See `torchao __init__.py <https://github.com/pytorch/ao/blob/881e84b4398eddcea6fee4d911fc329a38b5cd69/torchao/__init__.py#L26-L28>`_
   for an example.

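As a minimal sketch of the second approach: the library path and the operator name below are placeholders, not artifacts from this tutorial's build.

```python
import os

import torch

# Placeholder path: replace with the shared library produced by your build.
LIB_PATH = "/path/to/library.so"

if os.path.exists(LIB_PATH):
    # Registers every operator the library defines under the torch.ops
    # namespace, without importing any Python C extension module.
    torch.ops.load_library(LIB_PATH)
    # A library that registered e.g. "mylib::myop" (hypothetical name)
    # would now be callable as torch.ops.mylib.myop(...).
```

Because ``load_library`` only needs a filesystem path, the same shared library can serve multiple Python versions, which is what makes this route Python-agnostic.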
Adding training (autograd) support for an operator
--------------------------------------------------
.. customcarditem::
   :header: Custom C++ and CUDA Extensions
   :card_description: Create a neural network layer with no parameters using numpy. Then use scipy to create a neural network layer that has learnable weights.

.. customcarditem::
   :header: Extending TorchScript with Custom C++ Operators
   :card_description: Implement a custom TorchScript operator in C++, build it into a shared library, use it in Python to define TorchScript models, and load it into a C++ application for inference workloads.