Commit 0cfdfd7

committed
Update user scenarios
1 parent a8d42e7 commit 0cfdfd7

File tree

4 files changed: +34 −10 lines changed


rfcs/20201027-modular-tensorflow-graph-c-api.md

@@ -116,7 +116,30 @@ When initializing, TensorFlow loads the plugin and registers a new graph optimiz
 ### Supported User Scenarios
 
 This section describes user scenarios for plugin graph optimizer.
-Plugin graph optimizer is targeting backend device specific optimization, and only one optimizer is allowed to be registered per device type, so device type will be used as key to decide whether TensorFlow proper needs to run this optimizer by checking graph device type and registered device type. To simplify multiple optimizers coordination and avoid optimization conflict, multiple optimizers cannot register to the same device type. If more than one optimizer is registered to the same device type, these optimizers's initialization would fail due to registration conflict. Users need to manually select which optimizer they want to use by unloading the conflicting plugin.
+
+* **Supported scenario**: Each plugin can register its own graph optimizer.
+
+  The plugin graph optimizer targets backend-device-specific optimization. TensorFlow proper fully controls the plugin's behavior: a plugin registers its own graph optimizer, and optimizers for other device types are not allowed. TensorFlow proper runs a plugin optimizer only when the graph device type matches the registered device type.
+
+  <p align="center">
+  <img src="20201027-modular-tensorflow-graph-c-api/scenario1.png" height="100"/>
+  Scenario 1: Each plugin registers its own graph optimizer
+  </p>
+
+* **Unsupported scenario**: A plugin cannot register multiple graph optimizers.
+
+  To simplify coordination among multiple optimizers and to avoid optimization conflicts, multiple optimizers cannot be registered for the same device type. If more than one optimizer is registered for the same device type, their initialization fails with a registration conflict. Users must manually select which optimizer to use by unloading the conflicting plugin.
+
+  <p align="center">
+  <img src="20201027-modular-tensorflow-graph-c-api/scenario2.png" height="150"/>
+  Scenario 2: A plugin registers multiple graph optimizers
+  </p>
+
+* **Undefined scenario**: Registering a graph optimizer without a pluggable device.
+
+  <p align="center">
+  <img src="20201027-modular-tensorflow-graph-c-api/scenario3.png" height="100"/>
+  Scenario 3: Registering a graph optimizer without a pluggable device
+  </p>
 
 ### Front-end python use case
 
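The registration rule in the hunk above (one optimizer per device type, with a conflict when a second one registers) can be sketched with a hypothetical device-type-keyed registry. This is an illustration only; `OptimizerRegistry`, `Register`, and `ShouldRun` are not part of the proposed API.

```cpp
// Hypothetical sketch of "device type as key": the second registration for
// the same device type fails, and TensorFlow proper would run an optimizer
// only when the graph's device type matches a registered one.
#include <map>
#include <string>

// Simplified stand-in for a plugin's optimize entry point.
using OptimizeFn = void (*)(void*, void*, void*);

class OptimizerRegistry {
 public:
  // Returns false on a registration conflict (device type already taken).
  bool Register(const std::string& device_type, OptimizeFn fn) {
    return optimizers_.emplace(device_type, fn).second;
  }

  // True when the graph device type matches a registered device type.
  bool ShouldRun(const std::string& graph_device_type) const {
    return optimizers_.count(graph_device_type) > 0;
  }

 private:
  std::map<std::string, OptimizeFn> optimizers_;
};
```

In this sketch, "unloading the conflicting plugin" corresponds to the losing plugin simply never getting its entry into the map.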
@@ -131,9 +154,9 @@ Flag `use_plugin_optimizers` is provided for front-end python users to control t
 ```
 
 This API can be used to:
-1. Turn on/off all registered plugin graph optimizers. By default, the registered optimizers are turned on, users can turn off them. If the registered optimizers are turned on and the graph device type is matched with registered device type, they would be runnning.
-2. Use recommended configuration of existing optimizers.
-If pluggable graph optimizer is registered to a device type, e.g., GPU, it is optional for plugin authors to provide a recommended configuration indicate whether some of existing optimizers in proper can be turned on/off, by populating flags in `TP_OptimizerRegistrationParams`.
+* Turn on/off all registered plugin graph optimizers. By default, registered optimizers are turned on, and users can turn them off. A registered optimizer runs when it is turned on and the graph device type matches the registered device type.
+* Use the recommended configuration of existing optimizers.
+  If a pluggable graph optimizer is registered for a device type, e.g. GPU, plugin authors can optionally provide a recommended configuration, indicating whether some existing optimizers in proper should be turned on/off, by populating flags in `TP_OptimizerRegistrationParams`.
 
 ```cpp
 TF_Bool get_remapping() { return false; }
@@ -208,11 +231,11 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
   void* ext;  // reserved for future use
   void* (*create_func)();
   void (*optimize_func)(void*, TF_Buffer*, TF_Buffer*);
-  void (*delete_func)(void*);
+  void (*destroy_func)(void*);
 } TP_Optimizer;
 
 #define TP_OPTIMIZER_STRUCT_SIZE \
-  TF_OFFSET_OF_END(TP_Optimizer, delete_func)
+  TF_OFFSET_OF_END(TP_Optimizer, destroy_func)
 
 typedef struct TP_OptimizerRegistrationParams {
   size_t struct_size;
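The `TP_OPTIMIZER_STRUCT_SIZE` pattern above can be sketched standalone. This assumes `TF_OFFSET_OF_END` expands to the offset of a member plus its size, as in the modular TensorFlow C API macros; the struct definition here is a local mirror, not the real header.

```cpp
// Sketch of struct-size-based ABI versioning: struct_size records how much
// of TP_Optimizer the plugin was compiled against, so proper can tell which
// trailing members a (possibly older) plugin actually filled in.
#include <cstddef>

typedef struct TF_Buffer TF_Buffer;  // opaque, as in the C API

typedef struct TP_Optimizer {
  size_t struct_size;
  void* ext;  // reserved for future use
  void* (*create_func)();
  void (*optimize_func)(void*, TF_Buffer*, TF_Buffer*);
  void (*destroy_func)(void*);
} TP_Optimizer;

// Assumed definition: offset of MEMBER plus MEMBER's size.
#define TF_OFFSET_OF_END(TYPE, MEMBER) \
  (offsetof(TYPE, MEMBER) + sizeof(((TYPE*)0)->MEMBER))

#define TP_OPTIMIZER_STRUCT_SIZE \
  TF_OFFSET_OF_END(TP_Optimizer, destroy_func)
```

A plugin built against this header sets `struct_size = TP_OPTIMIZER_STRUCT_SIZE`; if a later header appends members, proper can compare the reported size against each member's end offset before reading it.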
@@ -239,6 +262,7 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
 ```
 
 * **Plugin util C API**
+
 ```cpp
 #ifdef __cplusplus
 extern "C" {
@@ -330,12 +354,12 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
 
 // Get a list of input OpInfo::TensorProperties given node name.
 // OpInfo::TensorProperties is represented as TF_Buffer*.
-void TF_GetInputProperties(TF_GraphProperties* g_prop, const char* name,
+void TF_GetInputPropertiesList(TF_GraphProperties* g_prop, const char* name,
                            TF_Buffer** prop, int max_size);
 
 // Get a list of output OpInfo::TensorProperties given node name.
 // OpInfo::TensorProperties is represented as TF_Buffer*.
-void TF_GetOutputProperties(TF_GraphProperties* g_prop, const char* name,
+void TF_GetOutputPropertiesList(TF_GraphProperties* g_prop, const char* name,
                             TF_Buffer** prop, int max_size);
 
 // Helper to maintain a map between function names in a given
@@ -395,7 +419,7 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
 for (int i = 0; i < max_size; i++) {
   in_prop_buf[i] = TF_NewBuffer();
 }
-TF_GetInputProperties(g_prop, "node1", in_prop_buf.data(), &max_size);
+TF_GetInputPropertiesList(g_prop, "node1", in_prop_buf.data(), &max_size);
 plugin::OpInfo::TensorProperties in_prop;
 plugin::BufferToMessage(in_prop_buf, in_prop);
 for (int i = 0; i < max_size; i++)
@@ -436,7 +460,7 @@ If pluggable graph optimizer is registered to a device type, e.g., GPU, it is op
 // Set functions to create a new optimizer.
 params->optimizer->create_func = P_Create;
 params->optimizer->optimize_func = P_Optimize;
-params->optimizer->delete_func = P_Delete;
+params->optimizer->destroy_func = P_Destroy;
 }
 ```
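The create/optimize/destroy lifecycle registered above can be sketched with hypothetical plugin-side functions. `TF_Buffer` here is a simplified stand-in (the real one carries serialized GraphDefs across the C API boundary), and the "optimization" is an identity copy; none of this is the real plugin implementation.

```cpp
// Hypothetical plugin-side functions matching the TP_Optimizer function
// pointers: create_func allocates per-optimizer state, optimize_func reads
// the input graph buffer and writes the output graph buffer, destroy_func
// frees the state.
#include <vector>

struct TF_Buffer {  // simplified stand-in, not the real TF_Buffer
  std::vector<unsigned char> data;
};

struct P_OptimizerState {  // hypothetical per-optimizer state
  int graphs_optimized = 0;
};

void* P_Create() { return new P_OptimizerState(); }

void P_Optimize(void* optimizer, TF_Buffer* graph_in, TF_Buffer* graph_out) {
  graph_out->data = graph_in->data;  // identity "optimization"
  static_cast<P_OptimizerState*>(optimizer)->graphs_optimized++;
}

void P_Destroy(void* optimizer) {
  delete static_cast<P_OptimizerState*>(optimizer);
}
```

TensorFlow proper calls `create_func` once per optimizer instance, `optimize_func` per matching graph, and `destroy_func` when the optimizer is torn down.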