Outstanding concept, difficult to start using #409
Comments
Hello @hjalmarlucius, thank you for opening this issue and for your interest in the project! I can add binaries for the Python and CUDA version you're asking for; indeed, there are currently no nightlies that match that spec. Currently the tensors cannot be mutated over time via something like "append", but you could use "cat" or "stack" to expand a NestedTensor (with the obvious performance penalty of memory allocation / deallocation). Also, for full transparency, note that it currently does not support autograd.
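For illustration, a minimal sketch of the cat-based workaround mentioned above; it assumes the prototype's `nestedtensor.nested_tensor` list constructor and that `torch.cat` is dispatched for NestedTensors as the comment suggests, so treat it as a sketch rather than confirmed API:

```python
import torch
import nestedtensor  # prototype package from pytorch/nestedtensor

# Two NestedTensors holding variable-length rows.
a = nestedtensor.nested_tensor([torch.randn(3, 8), torch.randn(5, 8)])
b = nestedtensor.nested_tensor([torch.randn(2, 8)])

# "Appending" means building a new NestedTensor that copies both inputs,
# hence the memory allocation/deallocation penalty mentioned above.
appended = torch.cat([a, b])

# If torch.cat isn't dispatched in your build, rebuilding from the
# constituents via unbind() achieves the same result (also with a copy).
appended = nestedtensor.nested_tensor(list(a.unbind()) + list(b.unbind()))
```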
Thanks for the awesome work! Any update here about binaries for CUDA 11 (on any Python 3.7+)? Our ColBERT work and its many derivatives in IR and NLP will benefit dramatically from this!
Hello @okhat, thank you for your interest! I've unfortunately not had the chance to work on this. Do you need autograd for your project?
@cpuhrsch I don't need autograd! I just need fast inference with variable-sized matrices. I'm trying to install on Linux + CUDA 11, and to my understanding there's only support for macOS? https://download.pytorch.org/whl/nightly/cu111/torch_nightly.html FYI: so far we've rolled our own nested tensor implementation around torch.as_strided, but it's neither convenient nor as fast as we'd like :-)
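For context, a hand-rolled wrapper of the kind described here usually packs all rows into one flat buffer and recovers each item as a strided view. The sketch below is a generic illustration of that idea; the class and method names are hypothetical, not ColBERT's actual code:

```python
import torch

class PackedRows:
    """Hypothetical packed storage for variable-length (n_i, dim) rows."""

    def __init__(self, tensors):
        self.lengths = [t.shape[0] for t in tensors]
        self.dim = tensors[0].shape[1]
        self.data = torch.cat(tensors, dim=0)  # flat (sum(n_i), dim) buffer
        self.offsets = [0]
        for n in self.lengths:
            self.offsets.append(self.offsets[-1] + n)

    def __getitem__(self, i):
        # Zero-copy view of item i via as_strided into the flat buffer.
        return torch.as_strided(
            self.data,
            size=(self.lengths[i], self.dim),
            stride=self.data.stride(),
            storage_offset=self.offsets[i] * self.data.stride()[0],
        )

packed = PackedRows([torch.randn(3, 8), torch.randn(5, 8), torch.randn(2, 8)])
print(packed[1].shape)  # torch.Size([5, 8])
```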
Would it be easy to build manually using setup.py? I'm happy to try that, and would appreciate it if you could let me know whether there are specific dependencies (or other instructions) to keep in mind!
@okhat - installing from source would indeed be easier for now. I'd recommend the following command:
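A typical from-source install of the prototype would look roughly like the sketch below; this is an assumption about the standard setup.py workflow, not the exact command recommended in the comment:

```bash
# Hypothetical from-source install sketch (not the exact recommended command).
git clone https://github.com/pytorch/nestedtensor
cd nestedtensor
python setup.py develop   # builds against the installed PyTorch nightly
```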
You don't need any dependencies other than a recent nightly. Keep in mind that NestedTensor operations need to be executed within the inference_mode context. The CI lags the most recent nightly, so there might be some build errors; please let me know if there are and we'll resolve them.
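As a rough illustration of the inference_mode requirement mentioned above; the `nestedtensor.nested_tensor` constructor and the relu dispatch are assumptions about the prototype's surface, the key point being that work happens inside `torch.inference_mode()`:

```python
import torch
import nestedtensor

with torch.inference_mode():
    # Assumed list constructor; variable-length rows, no padding.
    nt = nestedtensor.nested_tensor([torch.randn(3, 8), torch.randn(5, 8)])
    out = torch.nn.functional.relu(nt)  # NestedTensor ops run under inference_mode
```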
I'm able to build and use it without issues! Many thanks @cpuhrsch! I'll explore efficiency and portability. In particular, I hope this will be much faster than naive use of torch.as_strided and that it'll be easy enough to make sure our users don't necessarily have to compile from source [or at least don't have to think about it]. Happy to open a new issue if needed, but a couple of quick questions if that's okay:
The most efficient way I can think of here is to use torch.as_strided, create a contiguous copy of the tensor, mask out the padding spaces, then use nested_tensor_from_tensor_mask. But that seems pretty expensive!
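A sketch of that route, with plain slicing standing in for the as_strided step; the `(padded, mask)` signature of `nested_tensor_from_tensor_mask` is an assumption about the prototype:

```python
import torch
import nestedtensor

rows = [torch.randn(3, 8), torch.randn(5, 8), torch.randn(2, 8)]
max_len = max(r.shape[0] for r in rows)

# Contiguous padded copy plus a boolean mask marking the real entries.
padded = torch.zeros(len(rows), max_len, 8)
mask = torch.zeros(len(rows), max_len, dtype=torch.bool)
for i, r in enumerate(rows):
    padded[i, : r.shape[0]] = r
    mask[i, : r.shape[0]] = True

with torch.inference_mode():
    # Assumed signature: (padded tensor, mask); the mask granularity
    # (per-row vs. per-element) may differ in the actual prototype.
    nt = nestedtensor.nested_tensor_from_tensor_mask(padded, mask)
```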
Happy to hear you got it to work!
Could you give an example of the expected behavior here?
No, there is not, but there really should be. In particular, this constructor could be
The main reason I want to use nestedtensor and what value I want it to add
The features I wish nestedtensor had
The things about nestedtensor that frustrate me
[Optional] Example code or project I want to integrate with nestedtensor