### Supported User Scenarios
This section describes user scenarios for the plugin graph optimizer.
* **Supported scenario**: Each plugin can register its own graph optimizer.
The plugin graph optimizer targets backend-device-specific optimization. TensorFlow proper fully controls the plugin's behaviour: a plugin can register its own graph optimizer, and optimizers for other device types are not allowed. TensorFlow proper runs a plugin optimizer only when the graph's device type matches the registered device type.
*Scenario 1: Each plugin registers its own graph optimizer*
* **Unsupported scenario**: A plugin cannot register multiple graph optimizers.
To simplify coordination among multiple optimizers and to avoid optimization conflicts, multiple optimizers cannot be registered for the same device type. If more than one optimizer is registered for the same device type, those optimizers fail to initialize due to the registration conflict. Users must manually select the optimizer they want by unloading the conflicting plugin.
*Scenario 3: Registering a graph optimizer without a pluggable device*
### Front-end Python use case
Flag `use_plugin_optimizers` is provided for front-end Python users to control the registered plugin graph optimizers.
This API can be used to:
* Turn all registered plugin graph optimizers on or off. By default, registered optimizers are turned on; users can turn them off. When a registered optimizer is on and the graph's device type matches its registered device type, it will run.
* Use the recommended configuration of existing optimizers.

If a pluggable graph optimizer is registered for a device type, e.g., GPU, plugin authors can optionally provide a recommended configuration indicating whether some existing optimizers in TensorFlow proper should be turned on or off, by populating flags in `TP_OptimizerRegistrationParams`.
```cpp
TF_Bool get_remapping() { return false; }
```