Conversation
Similar questions and thoughts as in #22
> ## Boundaries between keras-cv and Tensorflow Addons
>
> - Highly experimental modeling, layers, losses, etc, live in addons.
> - Components from addons will graduate to Model Garden, given it incurs more usage,
Similar question regarding Addons graduation as in the NLP RFC
Same response as NLP scoping doc.
> ## Dependencies
>
> - Tensorflow version >= 2.4
Could we elaborate on why 2.4 is the minimum version to utilize this library?
Same response as NLP scoping doc.
> Specifically, for Object Detection tasks, `keras-cv` will include most anchor-based modules:
>
> - Common objects such as anchor generator, box matcher.
> - Keras layer components such as ROI generator, NMS postprocessor.
Similar question about custom op kernels as in the NLP RFC. The NMS postprocessor has a few custom op implementations in TF core, I believe, but there may be new custom ops needed in a CV repo like this.
NMS is mostly supported (through 5 different versions). I agree there may be new custom ops needed. One thing we haven't mentioned, which @bhack asked about, is what should go to keras-cv and what should go to TF core. In this specific case, graduating TFA custom ops to TF core, not Keras, seems the better option IMO.
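As a side note, here is a minimal sketch of what "mostly supported" means in practice (the boxes and thresholds below are invented for illustration): the existing `tf.image.non_max_suppression` wrapper dispatches to the `NonMaxSuppressionV*` kernels already in TF core, so a Keras postprocessing layer could call it without adding a new custom op.

```python
import tensorflow as tf

# Illustrative boxes in [y1, x1, y2, x2] format with per-box scores.
boxes = tf.constant([[0.0, 0.0, 1.0, 1.0],
                     [0.0, 0.1, 1.0, 1.1],
                     [0.0, 0.9, 1.0, 2.0]])
scores = tf.constant([0.9, 0.75, 0.6])

# Lowers to one of the NonMaxSuppressionV* ops shipped with TF core.
selected = tf.image.non_max_suppression(
    boxes, scores, max_output_size=10, iou_threshold=0.5)
kept_boxes = tf.gather(boxes, selected)
```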
Main question, probably already asked in another ticket: are these repositories going to host C++ code? Because the "Keras world" has historically been Python only.
More generally, see also our old thread about custom ops at tensorflow/addons#1752 (comment)
I am asking this also from an inference point of view. As we could see from the Google MediaPipe experience of putting end-to-end TF models "in production" on different platforms (e.g. on devices where the Python interpreter is not available), it still requires many C++ calculators, with implementations that sometimes depend on external C++ libraries like OpenCV, etc.
Just as an example, see the C++ calculator code in the image folder: https://github.com/google/mediapipe/tree/master/mediapipe/calculators/image
I think the philosophy here is that we cannot guarantee everything has a TF op. That is not scalable, and it is not aligned with future endeavors such as MLIR either. Instead, we should let the compiler interpret these and break them down into simpler ops.
But this is a good point. Specifically regarding OpenCV, those routines can at least be executed wrapped in tf.numpy_function. For training this is good enough (it is part of data preprocessing, so it is fine even without accelerator support). For serving, I can imagine people have their own solutions for optimization.
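A minimal sketch of that wrapping approach, assuming OpenCV is installed as `cv2` (the shapes and resize target here are invented for illustration): the OpenCV call runs in the Python runtime via `tf.numpy_function`, which is acceptable inside a `tf.data` input pipeline but, as noted above, is not a graph op a compiler can lower or a Python-free runtime can serve.

```python
import cv2
import numpy as np
import tensorflow as tf

def cv2_resize(image: np.ndarray) -> np.ndarray:
    # Plain NumPy in, plain NumPy out; OpenCV expects HWC arrays.
    return cv2.resize(image, (224, 224)).astype(np.float32)

def preprocess(image):
    # Wrap the OpenCV routine so it can be used inside a tf.data pipeline.
    resized = tf.numpy_function(cv2_resize, [image], Tout=tf.float32)
    resized.set_shape([224, 224, 3])  # numpy_function drops static shape info
    return resized

# Hypothetical input pipeline: random uint8 images standing in for real data.
images = np.random.randint(0, 255, size=(4, 320, 320, 3), dtype=np.uint8)
dataset = (
    tf.data.Dataset.from_tensor_slices(images)
    .map(preprocess, num_parallel_calls=tf.data.AUTOTUNE)
)
```

For serving, one would presumably either re-express such steps with in-graph ops (e.g. `tf.image.resize`) or move them outside the exported model entirely.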
Sounds good. Meanwhile, will this discussion alter the proposal?
It is hard to say, because I am not in a position to allocate TF team resources in a specific direction. 😉 But it would be nice if you could find, as a team, an internal consensus and the resources to bootstrap an MVP on this (if it really makes sense to create a CV dialect).
Also, as you know from some concrete examples we had across different tickets, coordination is hard in the wild with a classical approach.
E.g., just to mention something old and fresh at the same time: tensorflow/addons#914 (comment)
Is this specifically related to this package being pure Python? (keras_cv is on top of Keras, IMO)
Related to the NMS postprocessor you mentioned, and to other future candidate operators like it. Being Python only doesn't solve the runtime topic of transformations and compilers.
Where could keras-cv run? Only on targets with a Python interpreter.
IMHO it is a little bit like when we used PIL preprocessing in Keras instead of relying on TF ops as in the new preprocessing layers. It is not really the case for NMS, since we are now at the v3 implementation in TensorFlow; I picked it just because you mentioned it as an example.
EDIT:
In the TF (MLIR) dialect NMS is at v5: https://www.tensorflow.org/mlir/tf_ops?hl=en#tfnonmaxsuppressionv5_tfnonmaxsuppressionv5op
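To make the PIL-vs-TF-ops contrast concrete, here is a small sketch using the Keras preprocessing layers as they were namespaced around TF 2.4 (the module path has since moved out of `experimental`); the image shapes are invented for illustration. Preprocessing expressed as TF ops traces into the graph and ships inside a SavedModel, so it can run on targets without a Python interpreter, whereas a PIL-based pipeline cannot.

```python
import numpy as np
import tensorflow as tf

# Random float images standing in for a real batch.
images = tf.constant(
    np.random.randint(0, 255, size=(2, 320, 320, 3)), dtype=tf.float32)

# TF-op based preprocessing: traceable, exportable in a SavedModel, and
# executable on runtimes without a Python interpreter.
preprocess = tf.keras.Sequential([
    tf.keras.layers.experimental.preprocessing.Resizing(224, 224),
    tf.keras.layers.experimental.preprocessing.Rescaling(1.0 / 255.0),
])
out = preprocess(images)  # shape (2, 224, 224, 3), values in [0, 1]

# A PIL-based equivalent (PIL.Image.resize inside a Python loop) would produce
# similar pixels but can only run where a Python interpreter is available.
```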
> - Components from addons will graduate to Model Garden, given it incurs more usage,
>   and it works in CPU/GPU/TPU. The API interface will remain experimental after graduation.
>
> ## Boundaries between keras-cv and Model Garden
>
> - End to end modeling workflow and model specific details live in Model Garden
> - Model garden will re-use most of the building blocks from keras-cv
> - Components from Model Garden can graduate to keras-cv, given it is widely accepted,
>   it works performant in CPU/GPU/TPU. The API interface should remain stable after graduation.
@tanzhenyu per the discussion at the SIG Addons meeting, could we update this to say that modular components should not be housed in Model Garden, and that with sufficient usage/reliability Addons components will be graduated to keras/tf-core as needed?
Done
@tanzhenyu Can you explain why we have https://github.com/keras-team/keras-cv/blob/master/keras_cv/contributing.md?
So, if I understand correctly, it is duplicated until this repository gets the Google CI infrastructure. Right?
Yes, plus when we finalize the API-level RFC.
Is that why, in the original version of the RFC, you proposed to upstream Addons in Model Garden?
We need both: 1) the API-level RFC, and 2) CI infrastructure. The read-only repo is simply for those who want to read the code but don't want the trouble of finding it through Model Garden, and to give a general impression that this will soon be the only repo.