Overview

1. Introduction to All-in-One Development

PaddleX, the all-in-one development tool built on the advanced technology of PaddleOCR, supports low-code, full-workflow development in the OCR field. Low-code development enables simple and efficient model use, combination, and customization, which significantly reduces the time and difficulty of model development and greatly accelerates the adoption and promotion of models in industry. Features include:

  • 🎨 Rich Model One-Click Call: Integrates the 17 models for text-image intelligent analysis, general OCR, general layout parsing, table recognition, formula recognition, and seal recognition into 6 pipelines, each of which can be experienced with a single call through a simple Python API. In addition, the same set of APIs supports a total of 200+ models for image classification, object detection, image segmentation, and time series forecasting, organized into 20+ single-function modules, making it convenient for developers to combine models.

  • 🚀 High Efficiency and Low Barrier to Entry: Provides two approaches, unified commands and a GUI, for simple and efficient model use, combination, and customization. Supports multiple deployment methods, including high-performance inference, service-oriented deployment, and edge deployment. In addition, models can be developed on various mainstream hardware, such as NVIDIA GPU, Kunlunxin XPU, Ascend NPU, Cambricon MLU, and Haiguang DCU, with seamless switching between devices.
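The unified command-line entry point mentioned above can be sketched as follows. This is a minimal example that assumes the `paddlex` package is installed; the input filename and device identifier are placeholders:

```bash
# Run the general OCR pipeline on one image with a single unified command.
# Assumes `paddlex` is installed; the input file and device are placeholders.
if command -v paddlex >/dev/null 2>&1; then
  paddlex --pipeline OCR --input general_ocr_002.png --device gpu:0
else
  echo "paddlex is not installed"
fi
```

Switching hardware is then a matter of changing the `--device` flag (e.g. `npu:0` or `xpu:0`) rather than rewriting the pipeline.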

Note: PaddleX is committed to pipeline-level model training, inference, and deployment. A model pipeline is a predefined development workflow for a specific AI task, built from one or more single models (single-function modules) that together independently complete that type of task.

In PaddleX, all 6 OCR-related pipelines support local inference, and some also support online experience, so you can quickly try out the pre-trained models of each pipeline. If you are satisfied with a pipeline's pre-trained results, you can proceed directly to high-performance inference, service-oriented deployment, or edge deployment. If not, you can use the pipeline's custom development capabilities to improve the results. For the complete pipeline development process, please refer to the PaddleX Pipeline Usage Overview or the tutorials for each pipeline.
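The one-click Python API described above can be sketched as follows. This is a minimal example assuming `paddlex` is installed; the pipeline name "OCR" and the sample image filename are illustrative placeholders:

```python
# Minimal sketch of the PaddleX one-click pipeline API (assumes the
# `paddlex` package is installed; the image filename is a placeholder).
from importlib.util import find_spec

if find_spec("paddlex") is not None:
    from paddlex import create_pipeline

    pipeline = create_pipeline(pipeline="OCR")           # load the general OCR pipeline
    for res in pipeline.predict("general_ocr_002.png"):  # run inference on one image
        res.print()                                      # print the recognized text
        res.save_to_img("./output/")                     # save a visualized result image
else:
    print("paddlex is not installed; install it before running this example")
```

The same `create_pipeline` call works for the other pipelines by changing the pipeline name, which is what makes model combination and switching straightforward.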

In addition, PaddleX provides developers with a full-workflow, efficient model training and deployment tool based on a cloud GUI. No coding is required; developers only need to prepare a dataset that meets the pipeline's requirements to quickly start model training. For details, please refer to the tutorial "Developing Industrial-Level AI Models with Zero Barrier".

| Pipeline | Online Experience | Local Inference | High-Performance Inference | Service-Oriented Deployment | Edge Deployment | Custom Development | No-Code Development On AI Studio |
|---|---|---|---|---|---|---|---|
| OCR | Link | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| PP-ChatOCRv3 | Link | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ |
| Table Recognition | Link | ✅ | ✅ | ✅ | 🚧 | ✅ | ✅ |
| Layout Parsing | 🚧 | ✅ | 🚧 | ✅ | 🚧 | ✅ | 🚧 |
| Formula Recognition | 🚧 | ✅ | 🚧 | ✅ | 🚧 | ✅ | 🚧 |
| Seal Recognition | 🚧 | ✅ | ✅ | ✅ | 🚧 | ✅ | 🚧 |

❗Note: The above capabilities are implemented on GPU/CPU. PaddleX can also perform local inference and custom development on other mainstream hardware such as Kunlunxin, Ascend, Cambricon, and Haiguang. The table below details pipeline support on these devices; for the specific supported models, please refer to the Model List (Kunlunxin XPU), Model List (Ascend NPU), Model List (Cambricon MLU), and Model List (Haiguang DCU). We are continuously adapting more models and bringing high-performance inference and service-oriented deployment to mainstream hardware.

🚀 Support for Domestic Hardware Capabilities

| Pipeline Name | Ascend 910B | Kunlunxin XPU | Cambricon MLU | Haiguang DCU |
|---|---|---|---|---|
| General OCR | ✅ | ✅ | ✅ | 🚧 |
| Table Recognition | ✅ | 🚧 | 🚧 | 🚧 |