Clone the main branch of the PaddleOCR repository and install it. Because the repository is relatively large, cloning it with git clone can be slow, so it has already been downloaded for this tutorial.
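If you are starting from scratch instead, the clone-and-install step might look like the following sketch (a shallow clone is used here only to speed up the download; the repository URL is the official one):

```shell
# Shallow-clone the main branch of PaddleOCR
git clone --depth 1 https://github.com/PaddlePaddle/PaddleOCR.git
cd PaddleOCR

# Install the Python dependencies listed by the repository
python3 -m pip install -r requirements.txt
```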
Paddle2ONNX converts models from the PaddlePaddle format to the ONNX format. It currently provides stable operator support for exporting ONNX Opset versions 9 through 18, and some Paddle operators can also be converted to lower ONNX Opsets. For more details, please refer to Paddle2ONNX.
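Paddle2ONNX (and ONNXRuntime, used later for prediction) can be installed from PyPI; a minimal sketch:

```shell
# Install the Paddle->ONNX converter
python3 -m pip install paddle2onnx

# Install ONNXRuntime, which will execute the converted models
python3 -m pip install onnxruntime
```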
There are two ways to obtain a Paddle static graph model: download one of the inference models provided in the PaddleOCR model list, or follow the Model Export Instructions to convert trained weights into an inference model.
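As an illustration of the first option, downloading and unpacking one inference model might look like this (the URL below is an example entry of the kind found in the model list; substitute the model you actually need):

```shell
# Download an English PP-OCRv3 detection inference model (example URL from the model list)
wget -nc https://paddleocr.bj.bcebos.com/PP-OCRv3/english/en_PP-OCRv3_det_infer.tar -P ./inference/

# Unpack it in place; the directory will contain inference.pdmodel and inference.pdiparams
cd ./inference && tar xf en_PP-OCRv3_det_infer.tar && cd ..
```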
Using the PP-OCR series English detection, recognition, and classification models as examples:
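A conversion sketch for the detection model is shown below, assuming the model directory from the download step above; the recognition and classification models are converted the same way with their own `--model_dir` and `--save_file` values:

```shell
# Convert the Paddle detection inference model to ONNX
paddle2onnx --model_dir ./inference/en_PP-OCRv3_det_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file ./inference/det_onnx/model.onnx \
    --opset_version 11 \
    --enable_onnx_checker True

# Repeat analogously for the recognition and classification models,
# saving to ./inference/rec_onnx/model.onnx and ./inference/cls_onnx/model.onnx
```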
After execution, the ONNX models will be saved respectively under ./inference/det_onnx/, ./inference/rec_onnx/, and ./inference/cls_onnx/.
Note: For OCR models, dynamic shapes must be used during conversion; otherwise, the prediction results may slightly differ from directly using Paddle for prediction. Additionally, the following models currently do not support conversion to ONNX models: NRTR, SAR, RARE, SRN.
Note: Since Paddle2ONNX v1.2.3, dynamic shapes are supported by default, i.e., float32[p2o.DynamicDimension.0,3,p2o.DynamicDimension.1,p2o.DynamicDimension.2], and the --input_shape_dict option has been deprecated. If you need to adjust the shapes, you can use the following command to adjust the input shape of the Paddle model.
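One way to fix the input shape of an already-converted model is the paddle2onnx.optimize utility; the sketch below assumes the detection model path used earlier, and the input name 'x' plus the [-1,3,-1,-1] shape are illustrative values:

```shell
# Rewrite the model's input shape in place (-1 keeps a dimension dynamic)
python3 -m paddle2onnx.optimize \
    --input_model inference/det_onnx/model.onnx \
    --output_model inference/det_onnx/model.onnx \
    --input_shape_dict "{'x': [-1,3,-1,-1]}"
```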
After executing the command, the terminal will print out the predicted recognition information, and the visualization results will be saved under ./inference_results/.
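The prediction command referenced above might look like the following sketch, assuming the three ONNX models produced earlier; the image path and dictionary path are illustrative and should point at your own test image and the character dictionary matching the recognition model:

```shell
# Run the PaddleOCR end-to-end pipeline with ONNXRuntime as the backend
python3 tools/infer/predict_system.py --use_gpu=False --use_onnx=True \
    --det_model_dir=./inference/det_onnx/model.onnx \
    --rec_model_dir=./inference/rec_onnx/model.onnx \
    --cls_model_dir=./inference/cls_onnx/model.onnx \
    --rec_char_dict_path=ppocr/utils/en_dict.txt \
    --image_dir=doc/imgs_en/img_12.jpg
```

Dropping --use_onnx=True and pointing the model directories at the original Paddle inference models runs the same pipeline with Paddle Inference, which allows a direct comparison of the two backends' outputs.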
ONNXRuntime Execution Result:
Paddle Inference Execution Result:
Using ONNXRuntime for prediction, terminal output: