Serving

Serving is a common deployment method in real-world production environments. Inference capabilities are encapsulated as services, and clients access them via network requests to obtain inference results. PaddleOCR recommends using PaddleX for serving. Please refer to Differences and Connections between PaddleOCR and PaddleX to understand the relationship between PaddleOCR and PaddleX.

PaddleX provides the following serving solutions:

  • Basic Serving: An easy-to-use serving solution with low development costs.
  • High-Stability Serving: Built on NVIDIA Triton Inference Server. Compared to basic serving, this solution offers higher stability and allows users to tune configurations to optimize performance.

It is recommended to first use basic serving for quick validation, and then decide, based on actual needs, whether to adopt the more complex high-stability solution.

1. Basic Serving

1.1 Install Dependencies

Run the following command to install the PaddleX serving plugin via PaddleX CLI:

paddlex --install serving

1.2 Run the Server

Run the server via PaddleX CLI:

paddlex --serve --pipeline {PaddleX pipeline registration name or pipeline configuration file path} [{other command-line options}]

Take the general OCR pipeline as an example:

paddlex --serve --pipeline OCR

You should see information similar to the following:

INFO:     Started server process [63108]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)

To adjust configurations such as the model path, batch size, and deployment device, specify --pipeline as a custom configuration file path. Refer to PaddleOCR and PaddleX for the mapping between PaddleOCR pipelines and PaddleX pipeline registration names, as well as how to obtain and modify PaddleX pipeline configuration files.
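For example (the configuration file path ./OCR.yaml below is illustrative; substitute the path of your own modified configuration file):

paddlex --serve --pipeline ./OCR.yaml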

The command-line options related to serving are as follows:

  • --pipeline: PaddleX pipeline registration name or pipeline configuration file path.
  • --device: Deployment device for the pipeline. Defaults to cpu if no GPU is available, or gpu if one is.
  • --host: Hostname or IP address to which the server is bound. Defaults to 0.0.0.0.
  • --port: Port number on which the server listens. Defaults to 8080.
  • --use_hpip: If specified, uses high-performance inference.
  • --hpi_config: High-performance inference configuration. Refer to the PaddleX High-Performance Inference Guide for more information.
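For example, the following command (illustrative) serves the general OCR pipeline on GPU 0 and listens on port 8000 instead of the defaults:

paddlex --serve --pipeline OCR --device gpu:0 --port 8000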

1.3 Invoke the Service

The "Development Integration/Deployment" section in the PaddleOCR pipeline tutorial provides API references and multi-language invocation examples for the service.

2. High-Stability Serving

Please refer to the PaddleX Serving Guide. More information about PaddleX pipeline configuration files can be found in Using PaddleX Pipeline Configuration Files.

Note that, due to the lack of fine-grained optimization and other factors, the current high-stability serving solution provided by PaddleOCR may not match the performance of the PaddleServing-based solution in version 2.x. However, the new solution fully supports the PaddlePaddle 3.0 framework. We will continue to optimize it and consider introducing more performant deployment options in the future.