This project is no longer actively maintained. While existing releases remain available, there are no planned updates, bug fixes, new features, or security patches. Users should be aware that vulnerabilities may not be addressed.
TorchServe provides the following inference handlers out of the box. The models consumed by each handler are expected to support batched inference.
- Description : image_classifier handles image classification models trained on the ImageNet dataset.
- Input : RGB image
- Output : Batch of top-5 predictions and their respective probabilities for each image

For more details, see the examples.
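For illustration, a top-5 response for a single image might look like the fragment below; the class names and probabilities are invented, and the actual keys depend on the model and its `index_to_name.json` mapping:

```json
{
  "tabby": 0.51,
  "tiger_cat": 0.30,
  "Egyptian_cat": 0.10,
  "lynx": 0.05,
  "tiger": 0.04
}
```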
- Description : image_segmenter handles image segmentation models trained on the ImageNet dataset.
- Input : RGB image
- Output : Shape [N, CL, H, W], where N is the batch size, CL is the number of classes, H is the height, and W is the width.

For more details, see the examples.
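To make the [N, CL, H, W] output shape concrete, here is a small sketch that reduces such an output to a per-pixel class map by taking the argmax over the class dimension. The tiny nested-list tensor and the helper name `argmax_classes` are illustrative, not part of TorchServe:

```python
# Made-up scores with N=1 image, CL=2 classes, H=2, W=2.
scores = [
    [
        [[0.9, 0.2], [0.1, 0.7]],   # class 0 scores per pixel
        [[0.1, 0.8], [0.9, 0.3]],   # class 1 scores per pixel
    ]
]

def argmax_classes(batch):
    """Collapse [N, CL, H, W] scores to [N, H, W] class ids."""
    n = len(batch)
    cl = len(batch[0])
    h = len(batch[0][0])
    w = len(batch[0][0][0])
    return [
        [[max(range(cl), key=lambda c: batch[i][c][y][x]) for x in range(w)]
         for y in range(h)]
        for i in range(n)
    ]

print(argmax_classes(scores))  # → [[[0, 1], [1, 0]]]
```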
- Description : object_detector handles object detection models.
- Input : RGB image
- Output : Batch of lists of detected classes and their bounding boxes

Note : We recommend running torchvision>0.6; with older versions, the object_detector default handler runs only on the default GPU device.

For more details, see the examples.
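Purely as an illustration, a detection response for one image might resemble the fragment below; the class names, coordinate order, and the presence of a score field are assumptions about a typical detection model, not a guaranteed format:

```json
[
  {
    "person": [167.4, 57.9, 301.1, 436.4],
    "score": 0.99
  },
  {
    "dog": [12.1, 88.0, 140.6, 310.2],
    "score": 0.87
  }
]
```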
- Description : text_classifier handles models trained on the AG_NEWS dataset.
- Input : text file
- Output : Class of the input text (batching is not supported)

For more details, see the examples.
For a more comprehensive list of available handlers, make sure to check out the examples page.
The image_classifier, text_classifier, and object_detector handlers can all automatically map from numeric classes (0, 1, 2, ...) to friendly strings. To do this, simply include in your model archive a file named index_to_name.json that contains a mapping of class number (as a string) to friendly name (also as a string). You can see some examples of this file in the examples.
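A minimal sketch of writing such a file from Python; the three class labels are hypothetical placeholders, not taken from any real dataset:

```python
import json

# Map class numbers (as strings) to friendly names (also strings).
# The labels below are invented placeholders.
index_to_name = {"0": "tabby", "1": "tiger_cat", "2": "lynx"}

with open("index_to_name.json", "w") as f:
    json.dump(index_to_name, f, indent=2)
```

Place the resulting index_to_name.json in the model archive alongside the model file.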
We welcome new contributed handlers. If your use case isn't covered by one of the existing default handlers, please follow the steps below to contribute it:

- Write a new class derived from BaseHandler and add it as a separate file in ts/torch_handler/
- Update model-archiver/model_packaging.py to add your class's name
- Run and update the unit tests in unit_tests. As always, make sure to run torchserve_sanity.py before submitting.