Implement GPU kernels #118
@gunan @martinwicke Is there any way we can see how tensorflow/community#2 is progressing? We've been discussing how we would like to package Addons with GPU kernels included, and I believe we'll be depending on this in order to have a single pip package that loads either GPU or non-GPU kernels. Is that correct? Publishing an …
@yifeif and I are working towards this.
Small update: dynamic kernels are now implemented on the tensorflow master branch, so we can begin work on this.
@seanpmorgan Finally got myself an NVIDIA GPU. I can work on this.
@gunan Hey, do you have some sample code on how this is supposed to be used? In the RFC, you mention a new API: …
During implementation, we decided to make it more general and name it "load_library" instead.
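For anyone looking for a concrete starting point, here is a minimal sketch of loading a custom kernel library with the renamed API; the _custom_ops.so path is hypothetical and only for illustration:

```python
import tensorflow as tf

# Hypothetical shared object built with the custom-op toolchain.
# tf.load_library registers any ops and kernels it contains with the runtime.
tf.load_library("_custom_ops.so")

# tf.load_op_library does the same and additionally returns Python wrappers
# for the ops defined in the library.
custom_ops = tf.load_op_library("_custom_ops.so")
```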
@av8ramit Hi, Amit. Can you check the GPU test failures in #294? Which docker image do we use for the GPU tests? Is it the tensorflow/tensorflow:custom-op-gpu image required by https://github.com/tensorflow/custom-op? Thank you.
Hi @facaiy, we use an internal GCP image, but it's similar to the Docker image you listed.
@yifeif @gunan We're in the process of setting up our GPU env, but it looks as though … I'm running …
I can file an issue on tensorflow/tensorflow if it's just something that needs to be updated in that library.
@karmel @yifeif @gunan Friendly bump on the above question. We're currently dealing with a failing test case that is only occurring on Python 2. Why is there still a …
We have dropped support for Python 3.4 packages as it is fairly old. @karmel, could this issue be solved by upgrading the Python 3 version to Python 3.5?
Good to know, thanks. So we will upgrade our CI scripts to install py36, but for local testing the custom-op docker image we use only has py34 installed (still waiting on a new custom-op image, which looks like it's coming soon). Is there a reason for the …
Closing this as all kernels are now implemented. I'll create a new issue for discussing how we will package GPU kernels, as there is some discussion needed. For record keeping, here are the notes from the SIG Build meeting regarding the 2 tf-core packages: …
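As a rough illustration of the single-package idea discussed above, an __init__.py could try a CUDA-enabled shared object first and fall back to a CPU-only one; the .so names below are placeholders, not a decided layout:

```python
import tensorflow as tf


def _load_addons_kernels():
    """Load GPU kernels when available, otherwise fall back to CPU kernels.

    The shared-object names are placeholders; the real packaging layout is
    still under discussion in this thread.
    """
    try:
        tf.load_library("_addons_ops_gpu.so")
    except (tf.errors.NotFoundError, OSError, RuntimeError):
        # The exact exception depends on how the GPU build fails to load
        # (missing file vs. missing CUDA runtime).
        tf.load_library("_addons_ops.so")


_load_addons_kernels()
```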
Currently we've omitted the GPU kernels from Addons while we got everything set up. Now that CI testing is set up for GPU, we should begin adding these to Addons. There are a lot of things required, so I wanted to start this issue so we can track / discuss.
Some of the TODOs:
- Use tensorflow/tensorflow:custom-op-gpu as the docker image
- Set the TF_NEED_CUDA variable appropriately in our scripts (see the sketch after this list)
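For the TF_NEED_CUDA item, here is a small sketch (kept in Python to match the examples above) of how a configure step could turn the environment variable into bazel settings; the .bazelrc path and the --config=cuda flag are assumptions modeled on tensorflow/custom-op, not the final scripts:

```python
import os


def write_cuda_config(bazelrc_path=".bazelrc"):
    """Append CUDA build settings when TF_NEED_CUDA=1 is exported.

    The bazelrc path and the exact flags are placeholders; the real CI
    scripts will decide both.
    """
    if os.environ.get("TF_NEED_CUDA", "0") == "1":
        with open(bazelrc_path, "a") as f:
            f.write("build --config=cuda\n")
            f.write("test --config=cuda\n")


if __name__ == "__main__":
    write_cuda_config()
```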