
PP-ChatOCRv4-doc Pipeline Tutorial

1. Introduction to PP-ChatOCRv4-doc Pipeline

PP-ChatOCRv4-doc is an intelligent document and image analysis solution from PaddlePaddle that combines LLM, MLLM, and OCR technologies to address complex document information extraction challenges such as layout analysis, rare characters, multi-page PDFs, tables, and seal recognition. Integrated with ERNIE Bot, it fuses massive data and knowledge to achieve high accuracy and wide applicability. The pipeline also provides flexible service deployment options, supporting deployment on a variety of hardware. Furthermore, it offers custom development capabilities: you can train and fine-tune models on your own datasets and integrate the trained models seamlessly.

The Document Scene Information Extraction v4 pipeline includes modules for Layout Region Detection, Table Structure Recognition, Table Classification, Table Cell Localization, Text Detection, Text Recognition, Seal Text Detection, Text Image Rectification, and Document Image Orientation Classification. The relevant models are integrated as sub-pipelines, and you can view the model configurations of different modules through the pipeline configuration.

If you prioritize model accuracy, choose a model with higher accuracy. If you prioritize inference speed, select a model with faster inference. If you prioritize model storage size, choose a model with a smaller storage size. Benchmarks for some models are as follows:

👉Model List Details

Table Structure Recognition Module Models:

| Model | Model Download Link | Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| SLANet | Inference Model / Training Model | 59.52 | 103.08 / 103.08 | 197.99 / 197.99 | 6.9 | SLANet is a table structure recognition model developed by the Baidu PaddleX team. It significantly improves the accuracy and inference speed of table structure recognition by adopting the CPU-friendly lightweight backbone PP-LCNet, the high-low-level feature fusion module CSP-PAN, and the SLA Head feature decoding module, which aligns structural and positional information. |
| SLANet_plus | Inference Model / Training Model | 63.69 | 140.29 / 140.29 | 195.39 / 195.39 | 6.9 | SLANet_plus is an enhanced version of SLANet, the table structure recognition model developed by the Baidu PaddleX team. Compared with SLANet, it significantly improves recognition of borderless and complex tables and reduces sensitivity to table-localization accuracy, enabling accurate recognition even when table positions are offset. |

Layout Detection Module Models:

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model / Training Model | 90.4 | 34.6244 / 10.3945 | 510.57 / - | 123.76 | A high-precision layout localization model trained with RT-DETR-L on a self-built dataset of Chinese and English papers, magazines, contracts, books, exams, and research reports. |
| PP-DocLayout-M | Inference Model / Training Model | 75.2 | 13.3259 / 4.8685 | 44.0680 / 44.0680 | 22.578 | A layout localization model with balanced precision and efficiency, trained with PicoDet-L on a self-built dataset of Chinese and English papers, magazines, contracts, books, exams, and research reports. |
| PP-DocLayout-S | Inference Model / Training Model | 70.9 | 8.3008 / 2.3794 | 10.0623 / 9.9296 | 4.834 | A high-efficiency layout localization model trained with PicoDet-S on a self-built dataset of Chinese and English papers, magazines, contracts, books, exams, and research reports. |
Note: The evaluation dataset for the above precision metrics is a self-built layout detection dataset from PaddleOCR, containing 500 common document images of Chinese and English papers, magazines, contracts, books, exams, and research reports. GPU inference time is measured on an NVIDIA Tesla T4 with FP32 precision. CPU inference speed is measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

> ❗ The list above contains only the 3 core models primarily supported by the layout detection module. The module supports 11 models in total, including several predefined models for different category sets. The complete model list is as follows:

* Table Layout Detection Model
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model / Training Model | 97.5 | 8.02 / 3.09 | 23.70 / 20.41 | 7.4 | A high-efficiency layout localization model trained with PicoDet-1x on a self-built dataset, capable of detecting table regions. |
Note: The evaluation dataset for the above precision metrics is a self-built layout table detection dataset from PaddleOCR, containing 7835 Chinese and English document images with tables. GPU inference time is measured on an NVIDIA Tesla T4 with FP32 precision. CPU inference speed is measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

* 3-Class Layout Detection Model, including Table, Image, and Stamp
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model / Training Model | 88.2 | 8.99 / 2.22 | 16.11 / 8.73 | 4.8 | A high-efficiency layout localization model trained with PicoDet-S on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| PicoDet-L_layout_3cls | Inference Model / Training Model | 89.0 | 13.05 / 4.50 | 41.30 / 41.30 | 22.6 | A layout localization model with balanced efficiency and precision, trained with PicoDet-L on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| RT-DETR-H_layout_3cls | Inference Model / Training Model | 95.8 | 114.93 / 27.71 | 947.56 / 947.56 | 470.1 | A high-precision layout localization model trained with RT-DETR-H on a self-built dataset of Chinese and English papers, magazines, and research reports. |
Note: The evaluation dataset for the above precision metrics is a self-built layout detection dataset from PaddleOCR, containing 1154 common document images of Chinese and English papers, magazines, and research reports. GPU inference time is measured on an NVIDIA Tesla T4 with FP32 precision. CPU inference speed is measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

* 5-Class English Document Area Detection Model, including Text, Title, Table, Image, and List
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model / Training Model | 97.8 | 9.03 / 3.10 | 25.82 / 20.70 | 7.4 | A high-efficiency English document layout localization model trained with PicoDet-1x on the PubLayNet dataset. |
Note: The evaluation dataset for the above precision metrics is the [PubLayNet](https://developer.ibm.com/exchanges/data/all/publaynet/) dataset, containing 11245 English document images. GPU inference time is measured on an NVIDIA Tesla T4 with FP32 precision. CPU inference speed is measured on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.

* 17-Class Area Detection Model, including 17 common layout categories: Paragraph Title, Image, Text, Number, Abstract, Content, Figure Caption, Formula, Table, Table Caption, References, Document Title, Footnote, Header, Algorithm, Footer, and Stamp
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Storage Size (M) | Introduction |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model / Training Model | 87.4 | 9.11 / 2.12 | 15.42 / 9.12 | 4.8 | A high-efficiency layout localization model trained with PicoDet-S on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| PicoDet-L_layout_17cls | Inference Model / Training Model | 89.0 | 13.50 / 4.69 | 43.32 / 43.32 | 22.6 | A layout localization model with balanced efficiency and precision, trained with PicoDet-L on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| RT-DETR-H_layout_17cls | Inference Model / Training Model | 98.3 | 115.29 / 104.09 | 995.27 / 995.27 | 470.2 | A high-precision layout localization model trained with RT-DETR-H on a self-built dataset of Chinese and English papers, magazines, and research reports. |

Text Detection Module Models:

| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_det | Inference Model / Training Model | 82.69 | 83.34 / 80.91 | 442.58 / 442.58 | 109 | PP-OCRv4's server-side text detection model, featuring higher accuracy, suitable for deployment on high-performance servers |
| PP-OCRv4_mobile_det | Inference Model / Training Model | 77.79 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 | PP-OCRv4's mobile text detection model, optimized for efficiency, suitable for deployment on edge devices |

Text Recognition Module Models:

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv4_mobile_rec | Inference Model / Training Model | 78.20 | 4.82 / 4.82 | 16.74 / 4.64 | 10.6 | PP-OCRv4 is the successor to Baidu PaddlePaddle's self-developed text recognition model PP-OCRv3. By introducing data augmentation schemes and GTC-NRTR guidance branches, it further improves text recognition accuracy without compromising inference speed. The model is offered in server and mobile versions to meet industrial needs in different scenarios. |
| PP-OCRv4_server_rec | Inference Model / Training Model | 79.20 | 6.58 / 6.58 | 33.17 / 33.17 | 71.2 | |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| ch_SVTRv2_rec | Inference Model / Training Model | 68.81 | 8.08 / 8.08 | 50.17 / 42.50 | 73.9 | SVTRv2 is a server-side text recognition model developed by the OpenOCR team at the Vision and Learning Lab (FVL) of Fudan University. It won first prize in the OCR End-to-End Recognition Task of the PaddleOCR Algorithm Model Challenge, with a 6% improvement in end-to-end recognition accuracy over PP-OCRv4 on the A-list. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| ch_RepSVTR_rec | Inference Model / Training Model | 65.07 | 5.93 / 5.93 | 20.73 / 7.32 | 22.1 | RepSVTR is a mobile-oriented text recognition model based on SVTRv2. It won first prize in the OCR End-to-End Recognition Task of the PaddleOCR Algorithm Model Challenge, with a 2.5% improvement in end-to-end recognition accuracy over PP-OCRv4 on the B-list, while maintaining similar inference speed. |

Formula Recognition Module Models:

| Model Name | Model Download Link | BLEU Score | Normed Edit Distance | ExpRate (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size |
|---|---|---|---|---|---|---|---|
| LaTeX_OCR_rec | Inference Model / Training Model | 0.8821 | 0.0823 | 40.01 | 2047.13 / 2047.13 | 10582.73 / 10582.73 | 89.7 M |

Seal Text Detection Module Models:

| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Normal Mode / High-Performance Mode] | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_seal_det | Inference Model / Training Model | 98.21 | 74.75 / 67.72 | 382.55 / 382.55 | 109 | PP-OCRv4's server-side seal text detection model, featuring higher accuracy, suitable for deployment on better-equipped servers |
| PP-OCRv4_mobile_seal_det | Inference Model / Training Model | 96.47 | 7.82 / 3.09 | 48.28 / 23.97 | 4.6 | PP-OCRv4's mobile seal text detection model, offering higher efficiency, suitable for deployment on edge devices |
Test Environment Description:
  • Performance Test Environment
    • Test Dataset:
      • Text Image Rectification Model: DocUNet
      • Layout Region Detection Model: A self-built layout analysis dataset using PaddleOCR, containing 10,000 images of common document types such as Chinese and English papers, magazines, and research reports.
      • Table Structure Recognition Model: A self-built English table recognition dataset using PaddleX.
      • Text Detection Model: A self-built Chinese dataset using PaddleOCR, covering multiple scenarios such as street scenes, web images, documents, and handwriting, with 500 images for detection.
      • Chinese Recognition Model: A self-built Chinese dataset using PaddleOCR, covering multiple scenarios such as street scenes, web images, documents, and handwriting, with 11,000 images for text recognition.
      • ch_SVTRv2_rec: Evaluation set A for "OCR End-to-End Recognition Task" in the PaddleOCR Algorithm Model Challenge
      • ch_RepSVTR_rec: Evaluation set B for "OCR End-to-End Recognition Task" in the PaddleOCR Algorithm Model Challenge
      • English Recognition Model: A self-built English dataset using PaddleX.
      • Multilingual Recognition Model: A self-built multilingual dataset using PaddleX.
      • Text Line Orientation Classification Model: A self-built dataset using PaddleX, covering various scenarios such as ID cards and documents, containing 1000 images.
      • Seal Text Detection Model: A self-built dataset using PaddleX, containing 500 images of circular seal textures.
    • Hardware Configuration:
      • GPU: NVIDIA Tesla T4
      • CPU: Intel Xeon Gold 6271C @ 2.60GHz
      • Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
  • Inference Mode Description
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 precision / 8 threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |

2. Quick Start

The pre-trained pipelines provided by PaddleX let you experience their effects quickly. You can try the PP-ChatOCRv4-doc pipeline locally using Python.

2.1 Local Experience

Before using the PP-ChatOCRv4-doc pipeline locally, ensure you have completed the installation of the PaddleX wheel package according to the PaddleX Local Installation Tutorial. If you wish to selectively install dependencies, please refer to the relevant instructions in the installation guide. The dependency group corresponding to this pipeline is ie.

Before performing model inference, you first need to prepare the API key for the large language model. PP-ChatOCRv4 supports large model services on the Baidu Cloud Qianfan Platform, as well as locally deployed services that follow the standard OpenAI interface. If you use the Qianfan Platform, refer to Authentication and Authorization to obtain an API key. If you use a locally deployed large model service, refer to the PaddleNLP Large Model Deployment Documentation to deploy the chat and embedding interfaces, and fill in the corresponding base_url and api_key. If you also need a multimodal large model for data fusion, refer to the OpenAI service deployment section in the PaddleMIX Model Documentation to deploy one, and fill in the corresponding base_url and api_key.

After updating the configuration file, you can complete quick inference using just a few lines of Python code. You can use the test file for testing:

Note: If local deployment of a multimodal large model is restricted due to the local environment, you can comment out the lines containing the mllm variable in the code and only use the large language model for information extraction.

```python
from paddlex import create_pipeline

chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "ernie-3.5-8k",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

retriever_config = {
    "module_name": "retriever",
    "model_name": "embedding-v1",
    "base_url": "https://qianfan.baidubce.com/v2",
    "api_type": "qianfan",
    "api_key": "api_key",  # your api_key
}

mllm_chat_bot_config = {
    "module_name": "chat_bot",
    "model_name": "PP-DocBee2",
    "base_url": "http://127.0.0.1:8080/v1/chat/completions",  # your local mllm service url
    "api_type": "openai",
    "api_key": "api_key",  # your api_key
}

pipeline = create_pipeline(pipeline="PP-ChatOCRv4-doc", initial_predictor=False)

visual_predict_res = pipeline.visual_predict(
    input="vehicle_certificate-1.png",
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    use_common_ocr=True,
    use_seal_recognition=True,
    use_table_recognition=True,
)

visual_info_list = []
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]

vector_info = pipeline.build_vector(
    visual_info_list, flag_save_bytes_vector=True, retriever_config=retriever_config
)
mllm_predict_res = pipeline.mllm_pred(
    input="vehicle_certificate-1.png",
    key_list=["驾驶室准乘人数"],
    mllm_chat_bot_config=mllm_chat_bot_config,
)
mllm_predict_info = mllm_predict_res["mllm_res"]
chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],
    visual_info=visual_info_list,
    vector_info=vector_info,
    mllm_predict_info=mllm_predict_info,
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)
```

After running, the output result is as follows:

```
{'chat_res': {'驾驶室准乘人数': '2'}}
```

PP-ChatOCRv4 Prediction Process, API Description, and Output Description:

(1) Instantiate the PP-ChatOCRv4 pipeline object by calling the `create_pipeline` method. The parameter descriptions are as follows:

| Parameter | Description | Type | Default |
|---|---|---|---|
| `pipeline` | The pipeline name or the path to a pipeline configuration file. If a name, it must be a pipeline supported by PaddleX. | `str` | `None` |
| `device` | The device for pipeline inference. Supports specific GPU card numbers such as "gpu:0", other hardware card numbers such as "npu:0", and "cpu". | `str` | `gpu` |
| `use_hpip` | Whether to enable the high-performance inference plugin. If set to `None`, the setting from the configuration file is used. | `bool \| None` | `None` |
| `hpi_config` | High-performance inference configuration. | `dict \| None` | `None` |
| `initial_predictor` | Whether to initialize the inference modules immediately (if `False`, a module is initialized the first time it is used). | `bool` | `True` |
(2) Call the `visual_predict()` method of the PP-ChatOCRv4 pipeline object to obtain visual prediction results. This method returns a generator. The parameters of `visual_predict()` are described below:

| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `input` | The data to be predicted; supports multiple input types; required. | `Python Var \| str \| list` | **Python Var**: e.g., `numpy.ndarray` image data; **str**: the local path of an image or PDF file (e.g., `/root/data/img.jpg`), the network URL of an image or PDF file, or a local directory containing images to predict (e.g., `/root/data/`; PDF files inside a directory are not supported and must be given as a specific file path); **list**: elements of the above types, e.g., `[numpy.ndarray, numpy.ndarray]`, `["/root/data/img1.jpg", "/root/data/img2.jpg"]`, `["/root/data1", "/root/data2"]` | `None` |
| `device` | The device for pipeline inference. | `str \| None` | **CPU**: e.g., `cpu`; **GPU**: e.g., `gpu:0` (the first GPU); **NPU**: e.g., `npu:0`; **XPU**: e.g., `xpu:0`; **MLU**: e.g., `mlu:0`; **DCU**: e.g., `dcu:0`; **None**: defaults to the value used at pipeline initialization, which prefers local GPU 0 and falls back to CPU if unavailable | `None` |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `use_doc_unwarping` | Whether to use the document distortion correction module. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `use_textline_orientation` | Whether to use the text line orientation classification module. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `use_general_ocr` | Whether to use the OCR sub-pipeline. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `use_seal_recognition` | Whether to use the seal recognition sub-pipeline. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `use_table_recognition` | Whether to use the table recognition sub-pipeline. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `layout_threshold` | The score threshold for the layout model. | `float \| dict \| None` | Any float between 0 and 1; a dict such as `{0: 0.1}` with category IDs as keys and per-category thresholds as values; `None` defaults to the pipeline-initialized value (`0.5`) | `None` |
| `layout_nms` | Whether to use NMS for layout detection. | `bool \| None` | `True` or `False`; `None` defaults to the pipeline-initialized value (`True`) | `None` |
| `layout_unclip_ratio` | The expansion coefficient for layout detection boxes. | `float \| Tuple[float, float] \| dict \| None` | Any float greater than 0; a tuple of horizontal and vertical expansion coefficients; a dict with int `cls_id` keys and float expansion factors as values; `None` defaults to the pipeline-initialized value (`1.0`) | `None` |
| `layout_merge_bboxes_mode` | The method for filtering overlapping bounding boxes. | `str \| dict \| None` | `large`, `small`, or `union`: keep the larger box, the smaller box, or both when overlapping boxes are filtered; a dict with int `cls_id` keys and per-category merging modes as values; `None` defaults to the pipeline-initialized value (`large`) | `None` |
| `text_det_limit_side_len` | The side-length limit for text detection images. | `int \| None` | Any integer greater than 0; `None` defaults to the pipeline-initialized value (`960`) | `None` |
| `text_det_limit_type` | The type of side-length limit for text detection images. | `str \| None` | `min` ensures the shortest side of the image is not less than `det_limit_side_len`; `max` ensures the longest side is not greater than `limit_side_len`; `None` defaults to the pipeline-initialized value (`max`) | `None` |
| `text_det_thresh` | The pixel threshold for detection: pixels in the output probability map scoring above this threshold are treated as text pixels. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`0.3`) | `None` |
| `text_det_box_thresh` | The bounding-box threshold for detection: a detected box is kept as a text region when the average score of all pixels inside it exceeds this threshold. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`0.6`) | `None` |
| `text_det_unclip_ratio` | The expansion coefficient for text detection: the larger the value, the larger the expanded text region. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`2.0`) | `None` |
| `text_rec_score_thresh` | The text recognition threshold: only text results scoring above this threshold are retained. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`0.0`, i.e., no threshold) | `None` |
| `seal_det_limit_side_len` | The side-length limit for seal detection images. | `int \| None` | Any integer greater than 0; `None` defaults to the pipeline-initialized value (`960`) | `None` |
| `seal_det_limit_type` | The type of side-length limit for seal detection images. | `str \| None` | `min` ensures the shortest side of the image is not less than `det_limit_side_len`; `max` ensures the longest side is not greater than `limit_side_len`; `None` defaults to the pipeline-initialized value (`max`) | `None` |
| `seal_det_thresh` | The pixel threshold for detection: pixels in the output probability map scoring above this threshold are treated as seal pixels. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`0.3`) | `None` |
| `seal_det_box_thresh` | The bounding-box threshold for detection: a detected box is kept as a seal region when the average score of all pixels inside it exceeds this threshold. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`0.6`) | `None` |
| `seal_det_unclip_ratio` | The expansion coefficient for seal detection: the larger the value, the larger the expanded seal region. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`2.0`) | `None` |
| `seal_rec_score_thresh` | The seal recognition threshold: only text results scoring above this threshold are retained. | `float \| None` | Any float greater than 0; `None` defaults to the pipeline-initialized value (`0.0`, i.e., no threshold) | `None` |
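As an illustration, here is a hedged sketch of a `visual_predict()` call that overrides a few of the parameters above (the values shown are arbitrary examples, not tuned recommendations):

```python
# Override selected parameters; anything left unset falls back to the
# defaults the pipeline was initialized with (see the table above).
visual_predict_res = pipeline.visual_predict(
    input="vehicle_certificate-1.png",
    use_seal_recognition=True,
    use_table_recognition=True,
    layout_threshold=0.4,          # example: lower the layout score threshold
    text_det_limit_side_len=1280,  # example: allow larger detection inputs
    text_rec_score_thresh=0.5,     # example: keep only confident text results
)

# The method returns a generator; iterate it to trigger prediction.
for res in visual_predict_res:
    print(res["visual_info"])
```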
(3) Process the Visual Prediction Results. The prediction result for each sample is of `dict` type, containing two fields: `visual_info` and `layout_parsing_result`. You can obtain visual information through `visual_info` (including `normal_text_dict`, `table_text_list`, `table_html_list`, etc.), and place the information for each sample into the `visual_info_list` list, which will be fed into the large language model later. Of course, you can also obtain the layout parsing results through `layout_parsing_result`, which includes tables, text, images, and other content contained in the document or image. It supports operations such as printing, saving as an image, and saving as a `json` file:
```python
......
for res in visual_predict_res:
    visual_info_list.append(res["visual_info"])
    layout_parsing_result = res["layout_parsing_result"]
    layout_parsing_result.print()
    layout_parsing_result.save_to_img("./output")
    layout_parsing_result.save_to_json("./output")
    layout_parsing_result.save_to_xlsx("./output")
    layout_parsing_result.save_to_html("./output")
......
```
| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default |
|---|---|---|---|---|---|
| `print()` | Prints the result to the terminal | `format_json` | `bool` | Whether to format the output content with JSON indentation | `True` |
| | | `indent` | `int` | Indentation level to beautify the JSON output for better readability; only valid when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode. `True` escapes all non-ASCII characters; `False` retains the original characters. Only valid when `format_json` is `True` | `False` |
| `save_to_json()` | Saves the result as a JSON file | `save_path` | `str` | The save path. When it is a directory, the saved file is named after the input file | N/A |
| | | `indent` | `int` | Indentation level to beautify the JSON output for better readability; only valid when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode. `True` escapes all non-ASCII characters; `False` retains the original characters. Only valid when `format_json` is `True` | `False` |
| `save_to_img()` | Saves the visualization images of each intermediate module in PNG format | `save_path` | `str` | The save path; supports a directory or file path | N/A |
| `save_to_html()` | Saves the tables in the file as HTML files | `save_path` | `str` | The save path; supports a directory or file path | N/A |
| `save_to_xlsx()` | Saves the tables in the file as XLSX files | `save_path` | `str` | The save path; supports a directory or file path | N/A |
- Calling the `print()` method prints the results to the terminal. The printed content is explained as follows:
  - `input_path`: `(str)` The input path of the image to be predicted
  - `page_index`: `(Union[int, None])` If the input is a PDF file, the current page number of the PDF; otherwise `None`
  - `model_settings`: `(Dict[str, bool])` Model parameters configured for the pipeline
    - `use_doc_preprocessor`: `(bool)` Whether the document preprocessing pipeline is enabled
    - `use_general_ocr`: `(bool)` Whether the OCR pipeline is enabled
    - `use_seal_recognition`: `(bool)` Whether the seal recognition pipeline is enabled
    - `use_table_recognition`: `(bool)` Whether the table recognition pipeline is enabled
    - `use_formula_recognition`: `(bool)` Whether the formula recognition pipeline is enabled
  - `parsing_res_list`: `(List[Dict])` A list of parsing results; each element is a dictionary, and the list order is the reading order after parsing
    - `block_bbox`: `(np.ndarray)` The bounding box of the layout area
    - `block_label`: `(str)` The label of the layout area, such as `text` or `table`
    - `block_content`: `(str)` The content within the layout area
  - `overall_ocr_res`: `(Dict[str, Union[List[str], List[float], numpy.ndarray]])` A dictionary of global OCR results
    - `input_path`: `(Union[str, None])` The image path accepted by the OCR pipeline; `None` when the input is `numpy.ndarray`
    - `model_settings`: `(Dict)` Model configuration parameters for the OCR pipeline
    - `dt_polys`: `(List[numpy.ndarray])` A list of polygon boxes for text detection. Each box is a numpy array of 4 vertex coordinates with shape (4, 2) and dtype int16
    - `dt_scores`: `(List[float])` Confidence scores of the text detection boxes
    - `text_det_params`: `(Dict[str, Dict[str, int, float]])` Configuration parameters for the text detection module
      - `limit_side_len`: `(int)` The side-length limit for image preprocessing
      - `limit_type`: `(str)` The handling method for the side-length limit
      - `thresh`: `(float)` The confidence threshold for text pixel classification
      - `box_thresh`: `(float)` The confidence threshold for text detection boxes
      - `unclip_ratio`: `(float)` The expansion coefficient for text detection boxes
      - `text_type`: `(str)` The type of text detection, currently fixed as "general"
    - `textline_orientation_angles`: `(List[int])` Prediction results of text line orientation classification; when enabled, actual angle values are returned (e.g., [0,0,1])
    - `text_rec_score_thresh`: `(float)` The filtering threshold for text recognition results
    - `rec_texts`: `(List[str])` A list of recognized texts, containing only texts with confidence exceeding `text_rec_score_thresh`
- Calling the `save_to_json()` method saves the above content to the specified `save_path`. If a directory is specified, the file is saved as `save_path/{your_img_basename}.json`; if a file is specified, it is saved directly to that file. Since JSON files do not support saving numpy arrays, `numpy.array` values are converted to lists.
- Calling the `save_to_img()` method saves the visualization results to the specified `save_path`. If a directory is specified, the images are saved as `save_path/{your_img_basename}_ocr_res_img.{your_img_extension}`; if a file is specified, they are saved directly to that file. (Production pipelines often produce many result images, so specifying a single file path is not recommended: later images would overwrite earlier ones, leaving only the last one.)

In addition, visualized images and prediction results can be obtained through the following attributes:
| Attribute | Attribute Description |
|---|---|
| `json` | Obtains the prediction results in JSON format |
| `img` | Obtains the visualized images as a `dict` |
- The prediction result obtained through the `json` attribute is a `dict`, consistent with the content saved by calling the `save_to_json()` method.
- The prediction result returned by the `img` attribute is a `dict`. The keys are `layout_det_res`, `overall_ocr_res`, `text_paragraphs_ocr_res`, `formula_res_region1`, `table_cell_img`, and `seal_res_region1`, and the corresponding values are `Image.Image` objects used to display visualized images of layout detection, OCR, OCR text paragraphs, formulas, tables, and seal results, respectively. If optional modules are not used, the dictionary only contains `layout_det_res`.
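For example, a small sketch (assuming `layout_parsing_result` comes from the loop in step (3)) that consumes these two attributes:

```python
import os

os.makedirs("./output", exist_ok=True)

# Save every visualization image exposed by the `img` attribute;
# each value is a PIL Image.Image object.
for name, image in layout_parsing_result.img.items():
    image.save(os.path.join("./output", f"{name}.png"))

# The `json` attribute mirrors what save_to_json() would write.
print(layout_parsing_result.json.keys())
```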
(4) Call the `build_vector()` method of the PP-ChatOCRv4 pipeline object to construct vectors for the text content. The parameters of `build_vector()` are described below:

| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `visual_info` | Visual information: a dictionary containing visual information, or a list of such dictionaries | `list \| dict` | None | `None` |
| `min_characters` | Minimum number of characters | `int` | A positive integer, chosen according to the token length supported by the large language model | `3500` |
| `block_size` | Chunk size used when building a vector library for long text | `int` | A positive integer, chosen according to the token length supported by the large language model | `300` |
| `flag_save_bytes_vector` | Whether to save the text as a binary file | `bool` | `True` or `False` | `False` |
| `retriever_config` | Configuration parameters for the vector retrieval large model; see the "LLM_Retriever" field in the configuration file | `dict` | None | `None` |
This method returns a dictionary containing the visual text information, with the following contents:

- `flag_save_bytes_vector`: `(bool)` Whether the result is saved as a binary file
- `flag_too_short_text`: `(bool)` Whether the text length is less than the minimum number of characters
- `vector`: `(str | list)` The binary content or the text content, depending on the values of `flag_save_bytes_vector` and `min_characters`. If `flag_save_bytes_vector=True` and the text length is greater than or equal to the minimum number of characters, binary content is returned; otherwise, the original text is returned.
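A short sketch of how the returned dictionary might be inspected (field semantics as described above; the variables come from the example in Section 2.1):

```python
vector_info = pipeline.build_vector(
    visual_info_list,
    min_characters=3500,
    block_size=300,
    flag_save_bytes_vector=True,
    retriever_config=retriever_config,
)

# If the text was long enough and flag_save_bytes_vector=True, "vector"
# holds serialized binary content; otherwise it holds the original text.
if vector_info["flag_too_short_text"]:
    print("Text shorter than min_characters; raw text returned:", vector_info["vector"])
else:
    print("Vector library built; payload stored under 'vector'.")
```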
(5) Call the `mllm_pred()` method of the PP-ChatOCRv4 pipeline object to obtain multimodal large model extraction results. The parameters of `mllm_pred()` are described below:

| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `input` | Data to be predicted; supports multiple input types; required | `Python Var \| str` | **Python Var**: e.g., `numpy.ndarray` image data; **str**: the local path of an image file or a single-page PDF file (e.g., `/root/data/img.jpg`), or the network URL of such a file | `None` |
| `key_list` | A single key or a list of keys used to extract information | `Union[str, List[str]]` | None | `None` |
| `mllm_chat_bot_config` | Configuration parameters for the multimodal large model; see the "MLLM_Chat" field in the configuration file | `dict` | None | `None` |
(6) Call the `chat()` method of the PP-ChatOCRv4 pipeline object to extract key information. The parameters of `chat()` are described below:

| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `key_list` | A single key or a list of keys used to extract information | `Union[str, List[str]]` | None | `None` |
| `visual_info` | Visual information results | `List[dict]` | None | `None` |
| `use_vector_retrieval` | Whether to use vector retrieval | `bool` | `True` or `False` | `True` |
| `vector_info` | Vector information for retrieval | `dict` | None | `None` |
| `min_characters` | Minimum number of characters required | `int` | A positive integer | `3500` |
| `text_task_description` | Description of the text task | `str` | None | `None` |
| `text_output_format` | Output format of the text result | `str` | None | `None` |
| `text_rules_str` | Rules for generating text results | `str` | None | `None` |
| `text_few_shot_demo_text_content` | Text content for few-shot demonstration | `str` | None | `None` |
| `text_few_shot_demo_key_value_list` | Key-value list for few-shot demonstration | `str` | None | `None` |
| `table_task_description` | Description of the table task | `str` | None | `None` |
| `table_output_format` | Output format of the table result | `str` | None | `None` |
| `table_rules_str` | Rules for generating table results | `str` | None | `None` |
| `table_few_shot_demo_text_content` | Text content for table few-shot demonstration | `str` | None | `None` |
| `table_few_shot_demo_key_value_list` | Key-value list for table few-shot demonstration | `str` | None | `None` |
| `mllm_predict_info` | Results from the multimodal large model | `dict` | None | `None` |
| `mllm_integration_strategy` | Integration strategy for multimodal large model and large language model data; supports using either alone or fusing both results | `str` | `"integration"`, `"llm_only"`, `"mllm_only"` | `"integration"` |
| `chat_bot_config` | Configuration for the large language model; see the "LLM_Chat" field in the pipeline configuration file | `dict` | None | `None` |
| `retriever_config` | Configuration parameters for the vector retrieval large model; see the "LLM_Retriever" field in the configuration file | `dict` | None | `None` |
This method prints the result to the terminal. The printed content is explained as follows:

- `chat_res`: `(dict)` The information extraction result: a dictionary containing the keys to be extracted and their corresponding values.
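As an illustration, here is a hedged sketch of a `chat()` call that also supplies a task description and generation rules (the two prompt strings are illustrative placeholders, and the other variables come from the example in Section 2.1):

```python
chat_result = pipeline.chat(
    key_list=["驾驶室准乘人数"],
    visual_info=visual_info_list,
    use_vector_retrieval=True,
    vector_info=vector_info,
    text_task_description="Extract key fields from a vehicle certificate.",  # illustrative placeholder
    text_rules_str="Answer with the field value only; if absent, answer 'unknown'.",  # illustrative placeholder
    mllm_predict_info=mllm_predict_info,
    mllm_integration_strategy="integration",  # or "llm_only" / "mllm_only"
    chat_bot_config=chat_bot_config,
    retriever_config=retriever_config,
)
print(chat_result)
```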

3. Development Integration/Deployment

If the pipeline meets your requirements for inference speed and accuracy in production, you can proceed directly with development integration/deployment.

If you need to apply the pipeline directly in your Python project, you can refer to the sample code in 2.1 Local Experience.

Additionally, PaddleX provides three other deployment methods, detailed as follows:

🚀 High-Performance Inference: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides a high-performance inference plugin aimed at deeply optimizing model inference and pre/post-processing to significantly speed up the end-to-end process. For detailed instructions on high-performance inference, please refer to the PaddleX High-Performance Inference Guide.

☁️ Serving: Serving is a common deployment form in actual production environments. By encapsulating the inference functionality as a service, clients can access these services through network requests to obtain inference results. PaddleX supports multiple serving solutions for pipelines. For detailed instructions on serving, please refer to the PaddleX Serving Guide.

Below are the API references for basic serving and multi-language service invocation examples:

API Reference

For the main operations provided by the service:

  • The HTTP request method is POST.
  • Both the request body and response body are JSON data (JSON objects).
  • When the request is successfully processed, the response status code is 200, and the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed at 0. |
| `errorMsg` | `string` | Error description. Fixed at "Success". |
| `result` | `object` | Operation result. |
  • When the request is not successfully processed, the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error description. |

The main operations provided by the service are as follows:

  • analyzeImages

Uses computer vision models to analyze images, obtain OCR, table recognition results, etc., and extract key information from the images.

POST /chatocr-visual

  • Properties of the request body:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `file` | `string` | URL of an image or PDF file accessible to the server, or the Base64-encoded content of such a file. By default, for PDF files exceeding 10 pages, only the first 10 pages are processed. To remove the page limit, add the following to the pipeline configuration file: `Serving: {extra: {max_num_input_imgs: null}}` | Yes |
| `fileType` | `integer \| null` | File type. 0 represents a PDF file; 1 represents an image file. If absent, the file type is inferred from the URL. | No |
| `useDocOrientationClassify` | `boolean \| null` | Refer to the description of the `use_doc_orientation_classify` parameter of the pipeline object's `visual_predict` method. | No |
| `useDocUnwarping` | `boolean \| null` | Refer to the description of the `use_doc_unwarping` parameter of the pipeline object's `visual_predict` method. | No |
| `useSealRecognition` | `boolean \| null` | Refer to the description of the `use_seal_recognition` parameter of the pipeline object's `visual_predict` method. | No |
| `useTableRecognition` | `boolean \| null` | Refer to the description of the `use_table_recognition` parameter of the pipeline object's `visual_predict` method. | No |
| `layoutThreshold` | `number \| null` | Refer to the description of the `layout_threshold` parameter of the pipeline object's `visual_predict` method. | No |
| `layoutNms` | `boolean \| null` | Refer to the description of the `layout_nms` parameter of the pipeline object's `visual_predict` method. | No |
| `layoutUnclipRatio` | `number \| array \| object \| null` | Refer to the description of the `layout_unclip_ratio` parameter of the pipeline object's `visual_predict` method. | No |
| `layoutMergeBboxesMode` | `string \| object \| null` | Refer to the description of the `layout_merge_bboxes_mode` parameter of the pipeline object's `visual_predict` method. | No |
| `textDetLimitSideLen` | `integer \| null` | Refer to the description of the `text_det_limit_side_len` parameter of the pipeline object's `visual_predict` method. | No |
| `textDetLimitType` | `string \| null` | Refer to the description of the `text_det_limit_type` parameter of the pipeline object's `visual_predict` method. | No |
| `textDetThresh` | `number \| null` | Refer to the description of the `text_det_thresh` parameter of the pipeline object's `visual_predict` method. | No |
| `textDetBoxThresh` | `number \| null` | Refer to the description of the `text_det_box_thresh` parameter of the pipeline object's `visual_predict` method. | No |
| `textDetUnclipRatio` | `number \| null` | Refer to the description of the `text_det_unclip_ratio` parameter of the pipeline object's `visual_predict` method. | No |
| `textRecScoreThresh` | `number \| null` | Refer to the description of the `text_rec_score_thresh` parameter of the pipeline object's `visual_predict` method. | No |
| `sealDetLimitSideLen` | `integer \| null` | Refer to the description of the `seal_det_limit_side_len` parameter of the pipeline object's `visual_predict` method. | No |
| `sealDetLimitType` | `string \| null` | Refer to the description of the `seal_det_limit_type` parameter of the pipeline object's `visual_predict` method. | No |
| `sealDetThresh` | `number \| null` | Refer to the description of the `seal_det_thresh` parameter of the pipeline object's `visual_predict` method. | No |
| `sealDetBoxThresh` | `number \| null` | Refer to the description of the `seal_det_box_thresh` parameter of the pipeline object's `visual_predict` method. | No |
| `sealDetUnclipRatio` | `number \| null` | Refer to the description of the `seal_det_unclip_ratio` parameter of the pipeline object's `visual_predict` method. | No |
| `sealRecScoreThresh` | `number \| null` | Refer to the description of the `seal_rec_score_thresh` parameter of the pipeline object's `visual_predict` method. | No |
  • When the request is successfully processed, the `result` of the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| `layoutParsingResults` | `array` | Analysis results obtained using computer vision models. The array length is 1 (for image input) or the number of document pages actually processed (for PDF input). For PDF input, each element corresponds to one processed page. |
| `visualInfo` | `array` | Key information in the image, which can be used as input for other operations. |
| `dataInfo` | `object` | Input data information. |

Each element in layoutParsingResults is an object with the following properties:

| Name | Type | Meaning |
|---|---|---|
| `prunedResult` | `object` | A simplified version of the `res` field in the JSON representation of the results generated by the pipeline's `visual_predict` method, with the `input_path` and `page_index` fields removed. |
| `outputImages` | `object \| null` | Refer to the description of the `img` attribute of the pipeline's visual prediction result. |
| `inputImage` | `string \| null` | The input image, in JPEG format and Base64-encoded. |
  • buildVectorStore

Builds a vector database.

POST /chatocr-vector

  • Properties of the request body:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `visualInfo` | `array` | Key information in the image. Provided by the `analyzeImages` operation. | Yes |
| `minCharacters` | `integer \| null` | Minimum data length required to enable the vector database. | No |
| `blockSize` | `integer \| null` | Refer to the description of the `block_size` parameter of the pipeline object's `build_vector` method. | No |
| `retrieverConfig` | `object \| null` | Refer to the description of the `retriever_config` parameter of the pipeline object's `build_vector` method. | No |
  • When the request is successfully processed, the `result` of the response body has the following property:

| Name | Type | Meaning |
|---|---|---|
| `vectorInfo` | `object` | Serialized result of the vector database, which can be used as input for other operations. |
  • invokeMLLM

Invokes the MLLM.

POST /chatocr-mllm

  • Properties of the request body:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `image` | `string` | URL of an image file accessible to the server, or the Base64-encoded content of the image file. | Yes |
| `keyList` | `array` | List of keys. | Yes |
| `mllmChatBotConfig` | `object \| null` | Refer to the description of the `mllm_chat_bot_config` parameter of the pipeline object's `mllm_pred` method. | No |

  • When the request is successfully processed, the `result` of the response body has the following property:

| Name | Type | Meaning |
|---|---|---|
| `mllmPredictInfo` | `object` | MLLM invocation result. |
  • chat

Interacts with large language models to extract key information.

POST /chatocr-chat

  • Properties of the request body:

| Name | Type | Meaning | Required |
|---|---|---|---|
| `keyList` | `array` | List of keys. | Yes |
| `visualInfo` | `object` | Key information in the image. Provided by the `analyzeImages` operation. | Yes |
| `useVectorRetrieval` | `boolean \| null` | Refer to the description of the `use_vector_retrieval` parameter of the pipeline object's `chat` method. | No |
| `vectorInfo` | `object \| null` | Serialized result of the vector database. Provided by the `buildVectorStore` operation. Note that deserialization involves an unpickle operation; to prevent malicious attacks, be sure to use data from trusted sources. | No |
| `minCharacters` | `integer` | Minimum data length required to enable the vector database. | No |
| `textTaskDescription` | `string \| null` | Refer to the description of the `text_task_description` parameter of the pipeline object's `chat` method. | No |
| `textOutputFormat` | `string \| null` | Refer to the description of the `text_output_format` parameter of the pipeline object's `chat` method. | No |
| `textRulesStr` | `string \| null` | Refer to the description of the `text_rules_str` parameter of the pipeline object's `chat` method. | No |
| `textFewShotDemoTextContent` | `string \| null` | Refer to the description of the `text_few_shot_demo_text_content` parameter of the pipeline object's `chat` method. | No |
| `textFewShotDemoKeyValueList` | `string \| null` | Refer to the description of the `text_few_shot_demo_key_value_list` parameter of the pipeline object's `chat` method. | No |
| `tableTaskDescription` | `string \| null` | Refer to the description of the `table_task_description` parameter of the pipeline object's `chat` method. | No |
| `tableOutputFormat` | `string \| null` | Refer to the description of the `table_output_format` parameter of the pipeline object's `chat` method. | No |
| `tableRulesStr` | `string \| null` | Refer to the description of the `table_rules_str` parameter of the pipeline object's `chat` method. | No |
| `tableFewShotDemoTextContent` | `string \| null` | Refer to the description of the `table_few_shot_demo_text_content` parameter of the pipeline object's `chat` method. | No |
| `tableFewShotDemoKeyValueList` | `string \| null` | Refer to the description of the `table_few_shot_demo_key_value_list` parameter of the pipeline object's `chat` method. | No |
| `mllmPredictInfo` | `object \| null` | MLLM invocation result. Provided by the `invokeMLLM` operation. | No |
| `mllmIntegrationStrategy` | `string \| null` | Refer to the description of the `mllm_integration_strategy` parameter of the pipeline object's `chat` method. | No |
| `chatBotConfig` | `object \| null` | Refer to the description of the `chat_bot_config` parameter of the pipeline object's `chat` method. | No |
| `retrieverConfig` | `object \| null` | Refer to the description of the `retriever_config` parameter of the pipeline object's `chat` method. | No |

  • When the request is successfully processed, the `result` of the response body has the following property:

| Name | Type | Meaning |
|---|---|---|
| `chatResult` | `object` | Key information extraction result. |
  • Note: Including sensitive parameters, such as the API key for large model calls, in the request body can pose a security risk. If not necessary, set these parameters in the pipeline configuration file and do not pass them with the request, as sketched below.
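For instance, here is a hedged sketch of what such configuration-file entries might look like, mirroring the `chat_bot_config` and `retriever_config` dictionaries from Section 2.1. The exact field layout of the `LLM_Chat` and `LLM_Retriever` sections may differ between PaddleX versions, so treat this as an assumption rather than a definitive schema:

```yaml
SubModules:
  LLM_Chat:
    module_name: chat_bot
    model_name: ernie-3.5-8k
    base_url: "https://qianfan.baidubce.com/v2"
    api_type: openai
    api_key: "api_key"  # your api_key, kept server-side
  LLM_Retriever:
    module_name: retriever
    model_name: embedding-v1
    base_url: "https://qianfan.baidubce.com/v2"
    api_type: qianfan
    api_key: "api_key"  # your api_key, kept server-side
```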

Multi-language Service Invocation Examples

Python

```python
# This script only shows the use case for images. For calling with other file types, please read the API reference and make adjustments.

import base64
import pprint
import sys

import requests

API_BASE_URL = "http://0.0.0.0:8080"

image_path = "./demo.jpg"
keys = ["name"]

with open(image_path, "rb") as file:
    image_bytes = file.read()
    image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {
    "file": image_data,
    "fileType": 1,
}

resp_visual = requests.post(url=f"{API_BASE_URL}/chatocr-visual", json=payload)
if resp_visual.status_code != 200:
    print(
        f"Request to chatocr-visual failed with status code {resp_visual.status_code}."
    )
    pprint.pp(resp_visual.json())
    sys.exit(1)
result_visual = resp_visual.json()["result"]

for i, res in enumerate(result_visual["layoutParsingResults"]):
    print(res["prunedResult"])
    for img_name, img in res["outputImages"].items():
        img_path = f"{img_name}_{i}.jpg"
        with open(img_path, "wb") as f:
            f.write(base64.b64decode(img))
        print(f"Output image saved at {img_path}")

payload = {
    "visualInfo": result_visual["visualInfo"],
}
resp_vector = requests.post(url=f"{API_BASE_URL}/chatocr-vector", json=payload)
if resp_vector.status_code != 200:
    print(
        f"Request to chatocr-vector failed with status code {resp_vector.status_code}."
    )
    pprint.pp(resp_vector.json())
    sys.exit(1)
result_vector = resp_vector.json()["result"]

payload = {
    "image": image_data,
    "keyList": keys,
}
resp_mllm = requests.post(url=f"{API_BASE_URL}/chatocr-mllm", json=payload)
if resp_mllm.status_code != 200:
    print(
        f"Request to chatocr-mllm failed with status code {resp_mllm.status_code}."
    )
    pprint.pp(resp_mllm.json())
    sys.exit(1)
result_mllm = resp_mllm.json()["result"]

payload = {
    "keyList": keys,
    "visualInfo": result_visual["visualInfo"],
    "useVectorRetrieval": True,
    "vectorInfo": result_vector["vectorInfo"],
    "mllmPredictInfo": result_mllm["mllmPredictInfo"],
}
resp_chat = requests.post(url=f"{API_BASE_URL}/chatocr-chat", json=payload)
if resp_chat.status_code != 200:
    print(
        f"Request to chatocr-chat failed with status code {resp_chat.status_code}."
    )
    pprint.pp(resp_chat.json())
    sys.exit(1)
result_chat = resp_chat.json()["result"]
print("Final result:")
print(result_chat["chatResult"])
```
    


📱 Edge Deployment: Edge deployment places computing and data processing on the user's device itself, so the device can process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed instructions, please refer to the PaddleX Edge Deployment Guide. You can choose an appropriate deployment method for your pipeline based on your needs and proceed with subsequent AI application integration.

4. Custom Development

If the default model weights provided by the Document Scene Information Extraction v4 pipeline do not meet your expectations for accuracy or speed in your scenario, you can fine-tune the existing models with data from your specific domain or application scenario to improve recognition performance in your context.

4.1 Model Fine-Tuning

Since the Document Scene Information Extraction v4 pipeline consists of several modules, suboptimal performance may stem from any of them. Analyze cases with poor extraction results, identify the problematic module through visual inspection of the output images, and refer to the fine-tuning tutorials linked in the table below.

| Scenario | Module to Fine-Tune | Fine-Tuning Reference Link |
|---|---|---|
| Inaccurate layout area detection, such as missed detection of seals or tables | Layout Area Detection Module | Link |
| Inaccurate table structure recognition | Table Structure Recognition Module | Link |
| Missed detection of seal text | Seal Text Detection Module | Link |
| Missed detection of text | Text Detection Module | Link |
| Inaccurate text content | Text Recognition Module | Link |
| Inaccurate correction of vertical or rotated text lines | Text Line Orientation Classification Module | Link |
| Inaccurate correction of overall image rotation | Document Image Orientation Classification Module | Link |
| Inaccurate correction of image distortion | Text Image Rectification Module | Fine-tuning not supported yet |

4.2 Model Deployment

After fine-tuning with your private dataset, you will obtain local model weight files.

To use the fine-tuned weights, simply modify the pipeline configuration file, replacing the path of the default model weights with the path to your fine-tuned weights at the corresponding location:

```yaml
......
SubModules:
  TextDetection:
    module_name: text_detection
    model_name: PP-OCRv5_server_det
    model_dir: null # Replace with the path to the fine-tuned text detection model weights
    limit_side_len: 960
    limit_type: max
    max_side_limit: 4000
    thresh: 0.3
    box_thresh: 0.6
    unclip_ratio: 1.5

  TextRecognition:
    module_name: text_recognition
    model_name: PP-OCRv5_server_rec
    model_dir: null # Replace with the path to the fine-tuned text recognition model weights
    batch_size: 1
    score_thresh: 0
......
```
    

Subsequently, refer to the Python script method in 2.1 Local Experience to load the modified pipeline configuration file.

5. Multi-Hardware Support

PaddleX supports various mainstream hardware devices such as NVIDIA GPUs, Kunlun XPU, Ascend NPU, and Cambricon MLU, allowing seamless switching between different hardware by simply setting the device parameter.

For example, when using the Document Scene Information Extraction v4 pipeline, to change the running device from an NVIDIA GPU to an Ascend NPU, you only need to set the device in the script to npu:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(
    pipeline="PP-ChatOCRv4-doc",
    device="npu:0",  # gpu:0 --> npu:0
)
```
    

If you want to use the General Document Scene Information Extraction v4 pipeline on more types of hardware, please refer to the PaddleX Multi-Device Usage Guide.
