# Jetson Deployment for PaddleOCR
This section introduces the deployment of PaddleOCR on Jetson NX, TX2, Nano, AGX, and other hardware in the Jetson series.
## 1. Prepare Environment
You need a Jetson development board. If you want to use TensorRT, you also need to prepare the TensorRT environment; TensorRT version 7.1.3 is recommended.
### 1.1 Install PaddlePaddle in Jetson
From the PaddlePaddle download page, select the installation package that matches your JetPack, CUDA, and TensorRT versions. Here, we download `paddlepaddle_gpu-2.3.0rc0-cp36-cp36m-linux_aarch64.whl`.
Install PaddlePaddle:
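A minimal sketch of the install step, assuming the wheel downloaded above is in the current directory:

```bash
# Install the PaddlePaddle wheel downloaded above
pip3 install -U paddlepaddle_gpu-2.3.0rc0-cp36-cp36m-linux_aarch64.whl
```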
### 1.2 Download PaddleOCR code and install dependencies
Clone the PaddleOCR code:
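For example, from the official GitHub repository:

```bash
git clone https://github.com/PaddlePaddle/PaddleOCR.git
```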
Then install the dependencies:
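A sketch assuming the repository was cloned into `./PaddleOCR` as above:

```bash
cd PaddleOCR
pip3 install -r requirements.txt
```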
- Note: The Jetson CPU is relatively weak, so installing the dependencies can be slow; please be patient.
## 2. Perform prediction
Obtain the PP-OCR models from the model list in the documentation. The following takes the PP-OCRv3 models as an example to introduce the use of PP-OCR models on Jetson:
Download and unpack the PP-OCRv3 models:
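For example, the Chinese PP-OCRv3 detection and recognition models; the URLs below follow the official model-list layout and should be verified against the model list:

```bash
# Text detection model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_det_infer.tar
tar xf ch_PP-OCRv3_det_infer.tar
# Text recognition model
wget https://paddleocr.bj.bcebos.com/PP-OCRv3/chinese/ch_PP-OCRv3_rec_infer.tar
tar xf ch_PP-OCRv3_rec_infer.tar
```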
Run the text detection inference:
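A sketch of the command, assuming the detection model was unpacked to `./ch_PP-OCRv3_det_infer/` and using a sample image path from the repository (path assumed):

```bash
python3 tools/infer/predict_det.py \
    --det_model_dir=./ch_PP-OCRv3_det_infer/ \
    --image_dir=./doc/imgs/french_0.jpg \
    --use_gpu=True
```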
After executing the command, the prediction results will be printed in the terminal, and the visualized results will be saved in the `./inference_results/` directory.
Run the text recognition inference:
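A sketch assuming the recognition model was unpacked to `./ch_PP-OCRv3_rec_infer/` and using a sample word image from the repository (path assumed); note that PP-OCRv3 recognition expects a `3,48,320` input shape:

```bash
python3 tools/infer/predict_rec.py \
    --rec_model_dir=./ch_PP-OCRv3_rec_infer/ \
    --rec_image_shape="3,48,320" \
    --image_dir=./doc/imgs_words/en/word_1.png \
    --use_gpu=True
```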
After executing the command, the recognized text and its confidence score will be printed in the terminal.
Run the end-to-end text detection and text recognition inference:
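A sketch combining both models, with the same assumed model directories and sample image path as above:

```bash
python3 tools/infer/predict_system.py \
    --det_model_dir=./ch_PP-OCRv3_det_infer/ \
    --rec_model_dir=./ch_PP-OCRv3_rec_infer/ \
    --rec_image_shape="3,48,320" \
    --image_dir=./doc/imgs/french_0.jpg \
    --use_gpu=True
```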
After executing the command, the prediction results will be printed in the terminal, and the visualized results will be saved in the `./inference_results/` directory.
To enable TensorRT prediction, you only need to add `--use_tensorrt=True` to the above command:
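For example, for the end-to-end command above:

```bash
python3 tools/infer/predict_system.py \
    --det_model_dir=./ch_PP-OCRv3_det_infer/ \
    --rec_model_dir=./ch_PP-OCRv3_rec_infer/ \
    --rec_image_shape="3,48,320" \
    --image_dir=./doc/imgs/french_0.jpg \
    --use_gpu=True \
    --use_tensorrt=True
```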
For more PP-OCR model predictions, please refer to the documentation.