
Commit bd7caa1

Add PaddleClas infer.py (PaddlePaddle#107)
* Update README.md
* Update README.md
* Update README.md
* Create README.md
* Update README.md
* Update README.md
* Update README.md
* Update README.md
* Add evaluation calculate time and fix some bugs
* Update classification __init__
* Move to ppseg
* Add segmentation doc
* Add PaddleClas infer.py
* Update PaddleClas infer.py
* Delete .infer.py.swp

Co-authored-by: Jason <[email protected]>
1 parent 0e73c89 commit bd7caa1

File tree

7 files changed: +404 / -4 lines

examples/vision/classification/paddleclas/python/infer.py

+12 -4

@@ -1,5 +1,6 @@
 import fastdeploy as fd
 import cv2
+import os
 
 
 def parse_arguments():
@@ -9,7 +10,9 @@ def parse_arguments():
     parser.add_argument(
         "--model", required=True, help="Path of PaddleClas model.")
     parser.add_argument(
-        "--image", required=True, help="Path of test image file.")
+        "--image", type=str, required=True, help="Path of test image file.")
+    parser.add_argument(
+        "--topk", type=int, default=1, help="Return topk results.")
     parser.add_argument(
         "--device",
         type=str,
@@ -31,17 +34,22 @@ def build_option(args):
 
     if args.use_trt:
         option.use_trt_backend()
-        option.set_trt_input_shape("images", [1, 3, 640, 640])
+        option.set_trt_input_shape("inputs", [1, 3, 224, 224],
+                                   [1, 3, 224, 224], [1, 3, 224, 224])
     return option
 
 
 args = parse_arguments()
 
 # Configure the runtime and load the model
 runtime_option = build_option(args)
-model = fd.vision.classification.PaddleClasModel(args.model, runtime_option=runtime_option)
+model_file = os.path.join(args.model, "inference.pdmodel")
+params_file = os.path.join(args.model, "inference.pdiparams")
+config_file = os.path.join(args.model, "inference_cls.yaml")
+model = fd.vision.classification.PaddleClasModel(
+    model_file, params_file, config_file, runtime_option=runtime_option)
 
 # Predict the image classification result
 im = cv2.imread(args.image)
-result = model.predict(im)
+result = model.predict(im, args.topk)
 print(result)
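
For orientation, a minimal sketch of driving the updated API end to end, assuming the `ResNet50_vd_infer` archive and test image referenced in the Python README further below; the constructor and `predict(im, topk)` call mirror the diff above, and the commented-out lines mirror its TensorRT branch.

```
import os

import cv2
import fastdeploy as fd

# Hypothetical local paths; both come from the download links in the READMEs added by this commit.
model_dir = "ResNet50_vd_infer"
image_path = "ILSVRC2012_val_00000010.jpeg"

option = fd.RuntimeOption()
# To mirror the TensorRT branch of build_option(), uncomment the lines below;
# the fixed [1, 3, 224, 224] shape matches the new set_trt_input_shape call.
# option.use_gpu()
# option.use_trt_backend()
# option.set_trt_input_shape("inputs", [1, 3, 224, 224],
#                            [1, 3, 224, 224], [1, 3, 224, 224])

model = fd.vision.classification.PaddleClasModel(
    os.path.join(model_dir, "inference.pdmodel"),
    os.path.join(model_dir, "inference.pdiparams"),
    os.path.join(model_dir, "inference_cls.yaml"),
    runtime_option=option)

im = cv2.imread(image_path)
result = model.predict(im, 5)  # topk=5, analogous to the new --topk flag
print(result)
```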
@@ -0,0 +1,56 @@
# PaddleClas Model Deployment

## Model Version

- [PaddleClas Release/2.4](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4)

FastDeploy currently supports deployment of the following models:

- [PP-LCNet models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNet.md)
- [PP-LCNetV2 models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-LCNetV2.md)
- [EfficientNet models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/EfficientNet_and_ResNeXt101_wsl.md)
- [GhostNet models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
- [MobileNet models (v1, v2 and v3)](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
- [ShuffleNet models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Mobile.md)
- [SqueezeNet models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Others.md)
- [Inception models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/Inception.md)
- [PP-HGNet models](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/models/PP-HGNet.md)
- [ResNet models (including the vd variants)](https://github.com/PaddlePaddle/PaddleClas/blob/develop/docs/zh_CN/models/ResNet_and_vd.md)

## Preparing PaddleClas Deployment Models

To export a PaddleClas model, refer to its documentation: [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA).

Note: an exported PaddleClas model contains only the two files `inference.pdmodel` and `inference.pdiparams`. For deployment you also need the generic [inference_cls.yaml](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/deploy/configs/inference_cls.yaml) file that PaddleClas provides; FastDeploy reads the preprocessing information required at inference time from this yaml file. You can download the file and use it directly, but adjust its configuration parameters to your own needs by referring to the infer section of the corresponding PaddleClas training [config](https://github.com/PaddlePaddle/PaddleClas/tree/release/2.4/ppcls/configs/ImageNet).
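
As a convenience, a minimal sketch (hypothetical directory name, standard library only) that checks a prepared model directory holds the two exported files plus the yaml described above:

```
import os

# Hypothetical directory holding the exported model plus the copied-in yaml.
model_dir = "ResNet50_vd_infer"

required = ["inference.pdmodel", "inference.pdiparams", "inference_cls.yaml"]
missing = [name for name in required
           if not os.path.isfile(os.path.join(model_dir, name))]
if missing:
    raise FileNotFoundError(f"Missing deployment files in {model_dir}: {missing}")
```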
## Downloading Pretrained Models

For convenience, some models exported from PaddleClas (with the inference_cls.yaml file included) are provided below and can be downloaded and used directly.

| Model | Parameter File Size | Input Shape | Top1 | Top5 |
|:----- |:----- |:----- |:----- |:----- |
| [PPLCNet_x1_0](https://bj.bcebos.com/paddlehub/fastdeploy/PPLCNet_x1_0_infer.tgz) | 12MB | 224x224 | 71.32% | 90.03% |
| [PPLCNetV2_base](https://bj.bcebos.com/paddlehub/fastdeploy/PPLCNetV2_base_infer.tgz) | 26MB | 224x224 | 77.04% | 93.27% |
| [EfficientNetB7](https://bj.bcebos.com/paddlehub/fastdeploy/EfficientNetB7_infer.tgz) | 255MB | 600x600 | 84.3% | 96.9% |
| [EfficientNetB0_small](https://bj.bcebos.com/paddlehub/fastdeploy/EfficientNetB0_small_infer.tgz) | 18MB | 224x224 | 75.8% | 75.8% |
| [GhostNet_x1_3_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/GhostNet_x1_3_ssld_infer.tgz) | 29MB | 224x224 | 75.7% | 92.5% |
| [GhostNet_x0_5_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/GhostNet_x0_5_infer.tgz) | 10MB | 224x224 | 66.8% | 86.9% |
| [MobileNetV1_x0_25](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV1_x0_25_infer.tgz) | 1.9MB | 224x224 | 51.4% | 75.5% |
| [MobileNetV1_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV1_ssld_infer.tgz) | 17MB | 224x224 | 77.9% | 93.9% |
| [MobileNetV2_x0_25](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV2_x0_25_infer.tgz) | 5.9MB | 224x224 | 53.2% | 76.5% |
| [MobileNetV2_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV2_ssld_infer.tgz) | 14MB | 224x224 | 76.74% | 93.39% |
| [MobileNetV3_small_x0_35_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV3_small_x0_35_ssld_infer.tgz) | 6.4MB | 224x224 | 55.55% | 77.71% |
| [MobileNetV3_large_x1_0_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/MobileNetV3_large_x1_0_ssld_infer.tgz) | 22MB | 224x224 | 78.96% | 94.48% |
| [ShuffleNetV2_x0_25](https://bj.bcebos.com/paddlehub/fastdeploy/ShuffleNetV2_x0_25_infer.tgz) | 2.4MB | 224x224 | 49.9% | 73.79% |
| [ShuffleNetV2_x2_0](https://bj.bcebos.com/paddlehub/fastdeploy/ShuffleNetV2_x2_0_infer.tgz) | 29MB | 224x224 | 73.15% | 91.2% |
| [SqueezeNet1_1](https://bj.bcebos.com/paddlehub/fastdeploy/SqueezeNet1_1_infer.tgz) | 4.8MB | 224x224 | 60.1% | 81.9% |
| [InceptionV3](https://bj.bcebos.com/paddlehub/fastdeploy/InceptionV3_infer.tgz) | 92MB | 299x299 | 79.14% | 94.59% |
| [PPHGNet_tiny_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/PPHGNet_tiny_ssld_infer.tgz) | 57MB | 224x224 | 81.95% | 96.12% |
| [PPHGNet_base_ssld](https://bj.bcebos.com/paddlehub/fastdeploy/PPHGNet_base_ssld_infer.tgz) | 274MB | 224x224 | 85.0% | 97.35% |
| [ResNet50_vd](https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz) | 98MB | 224x224 | 79.12% | 94.44% |
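
For example, one of the archives above can be fetched and unpacked with only the Python standard library as a quick sketch; the URL comes from the ResNet50_vd row, and any other row works the same way:

```
import tarfile
import urllib.request

# URL taken from the ResNet50_vd entry in the table above.
url = "https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz"
archive = url.rsplit("/", 1)[-1]

urllib.request.urlretrieve(url, archive)  # download ResNet50_vd_infer.tgz
with tarfile.open(archive) as tar:
    tar.extractall(".")  # produces the ResNet50_vd_infer/ directory
```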
## Detailed Deployment Documents

- [Python Deployment](python)
- [C++ Deployment](cpp)
@@ -0,0 +1,14 @@
PROJECT(infer_demo C CXX)
CMAKE_MINIMUM_REQUIRED (VERSION 3.12)

# Specify the path of the downloaded and extracted FastDeploy SDK
option(FASTDEPLOY_INSTALL_DIR "Path of downloaded fastdeploy sdk.")

include(${FASTDEPLOY_INSTALL_DIR}/FastDeploy.cmake)

# Add the FastDeploy dependency headers
include_directories(${FASTDEPLOY_INCS})

add_executable(infer_demo ${PROJECT_SOURCE_DIR}/infer.cc)
# Link against the FastDeploy libraries
target_link_libraries(infer_demo ${FASTDEPLOY_LIBS})
@@ -0,0 +1,77 @@
# YOLOv7 C++ Deployment Example

This directory provides `infer.cc`, an example that quickly deploys YOLOv7 on CPU/GPU, and on GPU with TensorRT acceleration.

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Download the prebuilt deployment library and samples code matching your development environment; see [FastDeploy Prebuilt Libraries](../../../../../docs/compile/prebuilt_libraries.md)

Taking CPU inference on Linux as an example, run the following commands in this directory to build and test:

```
mkdir build
cd build
wget https://xxx.tgz
tar xvf fastdeploy-linux-x64-0.2.0.tgz
cmake .. -DFASTDEPLOY_INSTALL_DIR=${PWD}/fastdeploy-linux-x64-0.2.0
make -j

# Download the officially converted YOLOv7 model file and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/yolov7.onnx
wget https://gitee.com/paddlepaddle/PaddleDetection/raw/release/2.4/demo/000000087038.jpg


# CPU inference
./infer_demo yolov7.onnx 000000087038.jpg 0
# GPU inference
./infer_demo yolov7.onnx 000000087038.jpg 1
# TensorRT inference on GPU
./infer_demo yolov7.onnx 000000087038.jpg 2
```

## YOLOv7 C++ Interface

### YOLOv7 Class

```
fastdeploy::vision::detection::YOLOv7(
        const string& model_file,
        const string& params_file = "",
        const RuntimeOption& runtime_option = RuntimeOption(),
        const Frontend& model_format = Frontend::ONNX)
```

Loads and initializes a YOLOv7 model, where model_file is the exported model in ONNX format.

**Parameters**

> * **model_file**(str): path of the model file
> * **params_file**(str): path of the parameter file; pass an empty string when the model format is ONNX
> * **runtime_option**(RuntimeOption): backend inference configuration; None means the default configuration is used
> * **model_format**(Frontend): model format, ONNX by default

#### Predict Function

> ```
> YOLOv7::Predict(cv::Mat* im, DetectionResult* result,
>                 float conf_threshold = 0.25,
>                 float nms_iou_threshold = 0.5)
> ```
>
> Model prediction interface: takes an input image and directly returns the detection results.
>
> **Parameters**
>
> > * **im**: input image, which must be in HWC, BGR format
> > * **result**: detection result, including the detection boxes and the confidence of each box; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for the DetectionResult description
> > * **conf_threshold**: confidence threshold for filtering detection boxes
> > * **nms_iou_threshold**: IoU threshold used during NMS

### Class Member Variables

> > * **size**(vector<int>): resize target used during preprocessing; two integers representing [width, height], default [640, 640]
- [Model Description](../../)
- [Python Deployment](../python)
- [Vision Model Prediction Results](../../../../../docs/api/vision_results/)
@@ -0,0 +1,114 @@
// Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "fastdeploy/vision.h"

void CpuInfer(const std::string& model_file, const std::string& params_file,
              const std::string& config_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseCpu();
  auto model = fastdeploy::vision::classification::PaddleClasModel(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::ClassifyResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  // Print the classification result
  std::cout << res.Str() << std::endl;
}

void GpuInfer(const std::string& model_file, const std::string& params_file,
              const std::string& config_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  auto model = fastdeploy::vision::classification::PaddleClasModel(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::ClassifyResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  // Print the classification result
  std::cout << res.Str() << std::endl;
}

void TrtInfer(const std::string& model_file, const std::string& params_file,
              const std::string& config_file, const std::string& image_file) {
  auto option = fastdeploy::RuntimeOption();
  option.UseGpu();
  option.UseTrtBackend();
  option.SetTrtInputShape("inputs", {1, 3, 224, 224}, {1, 3, 224, 224},
                          {1, 3, 224, 224});
  auto model = fastdeploy::vision::classification::PaddleClasModel(
      model_file, params_file, config_file, option);
  if (!model.Initialized()) {
    std::cerr << "Failed to initialize." << std::endl;
    return;
  }

  auto im = cv::imread(image_file);

  fastdeploy::vision::ClassifyResult res;
  if (!model.Predict(&im, &res)) {
    std::cerr << "Failed to predict." << std::endl;
    return;
  }

  // Print the classification result
  std::cout << res.Str() << std::endl;
}
int main(int argc, char* argv[]) {
  if (argc < 4) {
    std::cout << "Usage: infer_demo path/to/model path/to/image run_option, "
                 "e.g ./infer_demo ./ResNet50_vd ./test.jpeg 0"
              << std::endl;
    std::cout << "The data type of run_option is int, 0: run with cpu; 1: run "
                 "with gpu; 2: run with gpu and use tensorrt backend."
              << std::endl;
    return -1;
  }

  // Assemble the paths of the three deployment files inside the model directory
  std::string model_file = std::string(argv[1]) + "/inference.pdmodel";
  std::string params_file = std::string(argv[1]) + "/inference.pdiparams";
  std::string config_file = std::string(argv[1]) + "/inference_cls.yaml";
  std::string image_file = argv[2];
  if (std::atoi(argv[3]) == 0) {
    CpuInfer(model_file, params_file, config_file, image_file);
  } else if (std::atoi(argv[3]) == 1) {
    GpuInfer(model_file, params_file, config_file, image_file);
  } else if (std::atoi(argv[3]) == 2) {
    TrtInfer(model_file, params_file, config_file, image_file);
  }
  return 0;
}
@@ -0,0 +1,75 @@
# PaddleClas Model Python Deployment Example

Before deployment, confirm the following two steps:

- 1. The software and hardware environment meets the requirements; see [FastDeploy Environment Requirements](../../../../../docs/quick_start/requirements.md)
- 2. Install the FastDeploy Python whl package; see [FastDeploy Python Installation](../../../../../docs/quick_start/install.md)

This directory provides `infer.py`, an example that quickly deploys ResNet50_vd on CPU/GPU, and on GPU with TensorRT acceleration. Run the following script to complete it:

```
# Download the ResNet50_vd model files and a test image
wget https://bj.bcebos.com/paddlehub/fastdeploy/ResNet50_vd_infer.tgz
tar -xvf ResNet50_vd_infer.tgz
wget https://gitee.com/paddlepaddle/PaddleClas/raw/release/2.4/deploy/images/ImageNet/ILSVRC2012_val_00000010.jpeg


# Download the deployment example code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy/examples/vision/classification/paddleclas/python

# CPU inference
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device cpu
# GPU inference
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu
# TensorRT inference on GPU (note: the first TensorRT run serializes the model, which takes some time; please be patient)
python infer.py --model ResNet50_vd_infer --image ILSVRC2012_val_00000010.jpeg --device gpu --use_trt True
```

The result returned after running is as follows:
```
ClassifyResult(
label_ids: 153,
scores: 0.686229,
)
```

## PaddleClasModel Python Interface

```
fd.vision.classification.PaddleClasModel(model_file, params_file, config_file, runtime_option=None, model_format=Frontend.PADDLE)
```

Loads and initializes a PaddleClas model, where model_file and params_file are the Paddle inference files exported from the trained model; see the documentation [Model Export](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.4/docs/zh_CN/inference_deployment/export_model.md#2-%E5%88%86%E7%B1%BB%E6%A8%A1%E5%9E%8B%E5%AF%BC%E5%87%BA) for details.

**Parameters**

> * **model_file**(str): path of the model file
> * **params_file**(str): path of the parameter file
> * **config_file**(str): inference deployment configuration file
> * **runtime_option**(RuntimeOption): backend inference configuration; None means the default configuration is used
> * **model_format**(Frontend): model format, Paddle by default

### predict Function

> ```
> PaddleClasModel.predict(input_image, topk=1)
> ```
>
> Model prediction interface: takes an input image and directly returns the classification results.
>
> **Parameters**
>
> > * **input_image**(np.ndarray): input data, which must be in HWC, BGR format
> > * **topk**(int): return the top-k classification results with the highest predicted probabilities, default 1

> **Return**
>
> > Returns a `fastdeploy.vision.ClassifyResult` structure; see [Vision Model Prediction Results](../../../../../docs/api/vision_results/) for its description

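To make the interface above concrete, a short usage sketch; the paths assume the ResNet50_vd_infer archive and test image downloaded earlier in this README, and the `label_ids`/`scores` fields follow the sample ClassifyResult output shown above:

```
import cv2
import fastdeploy as fd

model = fd.vision.classification.PaddleClasModel(
    "ResNet50_vd_infer/inference.pdmodel",
    "ResNet50_vd_infer/inference.pdiparams",
    "ResNet50_vd_infer/inference_cls.yaml")

im = cv2.imread("ILSVRC2012_val_00000010.jpeg")  # HWC, BGR as loaded by OpenCV
result = model.predict(im, topk=5)

# label_ids and scores mirror the fields shown in the sample output above.
for label_id, score in zip(result.label_ids, result.scores):
    print(label_id, score)
```
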
## Other Documents

- [PaddleClas Model Description](..)
- [PaddleClas C++ Deployment](../cpp)
- [Model Prediction Result Description](../../../../../docs/api/vision_results/)
