
Seal Text Recognition Pipeline Usage Tutorial

1. Introduction to Seal Text Recognition Pipeline

Seal text recognition is a technology that automatically extracts and recognizes the text content of seals from documents or images. It is part of document processing and is used in many scenarios, such as contract comparison, warehouse entry and exit review, and invoice reimbursement review.

The seal text recognition pipeline recognizes the text content of seals, extracting the text information from seal images and outputting it in text form. The pipeline integrates the industry-renowned end-to-end OCR system PP-OCRv4 and supports the detection and recognition of curved seal text. It also integrates an optional layout region localization module, which can accurately locate the seal within the entire document, as well as optional document image orientation correction and distortion correction functions. Based on this pipeline, millisecond-level accurate text content prediction can be achieved on a CPU. The pipeline also provides flexible service deployment methods, supporting invocation from multiple programming languages on various hardware. Moreover, it offers custom development capabilities: you can train and fine-tune on your own dataset, and the trained model can be seamlessly integrated.

The seal text recognition pipeline includes a seal text detection module and a text recognition module, as well as optional layout detection, document image orientation classification, and text image correction modules.

In this pipeline, you can choose the model to use based on the benchmark data below.

Layout Region Detection Module (Optional):

* Layout detection model, including 20 common categories: document title, paragraph title, text, page number, abstract, table of contents, references, footnotes, header, footer, algorithm, formula, formula number, image, table, figure and table title (figure title, table title, and chart title), seal, chart, sidebar text, and reference content

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-DocLayout_plus-L | Inference Model / Training Model | 83.2 | 34.6244 / 10.3945 | 510.57 / - | 126.01 | A higher-precision layout region localization model based on RT-DETR-L, trained on a self-built dataset including Chinese and English papers, multi-column magazines, newspapers, PPTs, contracts, books, exam papers, research reports, ancient books, Japanese documents, and documents with vertical text |
* Layout detection model, including 23 common categories: document title, paragraph title, text, page number, abstract, table of contents, references, footnotes, header, footer, algorithm, formula, formula number, image, figure title, table, table title, seal, chart title, chart, header image, footer image, and sidebar text

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-DocLayout-L | Inference Model / Training Model | 90.4 | 34.6244 / 10.3945 | 510.57 / - | 123.76 | A high-precision layout region localization model based on RT-DETR-L, trained on a self-built dataset including Chinese and English papers, magazines, contracts, books, exam papers, and research reports |
| PP-DocLayout-M | Inference Model / Training Model | 75.2 | 13.3259 / 4.8685 | 44.0680 / 44.0680 | 22.578 | A model balancing accuracy and efficiency, based on PicoDet-L, trained on a self-built dataset including Chinese and English papers, magazines, contracts, books, exam papers, and research reports |
| PP-DocLayout-S | Inference Model / Training Model | 70.9 | 8.3008 / 2.3794 | 10.0623 / 9.9296 | 4.834 | A highly efficient layout region localization model based on PicoDet-S, trained on a self-built dataset including Chinese and English papers, magazines, contracts, books, exam papers, and research reports |
> ❗ Listed above are the 4 core models that are the focus of the layout detection module. The module supports a total of 13 models with different predefined category sets, 9 of which include the seal category. Apart from the 4 core models above, the remaining models are as follows:

👉 Details of the Model List

* 3-class layout detection model, including 3 categories: table, image, and seal

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model / Training Model | 88.2 | 8.99 / 2.22 | 16.11 / 8.73 | 4.8 | A highly efficient layout region localization model based on the lightweight PicoDet-S model, trained on a self-built dataset including Chinese and English papers, magazines, and research reports |
| PicoDet-L_layout_3cls | Inference Model / Training Model | 89.0 | 13.05 / 4.50 | 41.30 / 41.30 | 22.6 | An efficiency-accuracy balanced layout region localization model based on PicoDet-L, trained on a self-built dataset including Chinese and English papers, magazines, and research reports |
| RT-DETR-H_layout_3cls | Inference Model / Training Model | 95.8 | 114.93 / 27.71 | 947.56 / 947.56 | 470.1 | A high-precision layout region localization model based on RT-DETR-H, trained on a self-built dataset including Chinese and English papers, magazines, and research reports |
* 17-class region detection model, including 17 common layout categories: paragraph title, image, text, number, abstract, content, chart title, formula, table, table title, references, document title, footnote, header, algorithm, footer, and seal

| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model / Training Model | 87.4 | 9.11 / 2.12 | 15.42 / 9.12 | 4.8 | A highly efficient layout region localization model based on the lightweight PicoDet-S model, trained on a self-built dataset including Chinese and English papers, magazines, and research reports |
| PicoDet-L_layout_17cls | Inference Model / Training Model | 89.0 | 13.50 / 4.69 | 43.32 / 43.32 | 22.6 | An efficiency-accuracy balanced layout region localization model based on PicoDet-L, trained on a self-built dataset including Chinese and English papers, magazines, and research reports |
| RT-DETR-H_layout_17cls | Inference Model / Training Model | 98.3 | 115.29 / 104.09 | 995.27 / 995.27 | 470.2 | A high-precision layout region localization model based on RT-DETR-H, trained on a self-built dataset including Chinese and English papers, magazines, and research reports |
Document Image Orientation Classification Module (Optional):

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-LCNet_x1_0_doc_ori | Inference Model / Training Model | 99.06 | 2.31 / 0.43 | 3.37 / 1.27 | 7 | A document image classification model based on PP-LCNet_x1_0, with four categories: 0°, 90°, 180°, and 270° |
Text Image Correction Module (Optional):

| Model | Model Download Link | CER | Model Storage Size (M) | Description |
|---|---|---|---|---|
| UVDoc | Inference Model / Training Model | 0.179 | 30.3 | A high-precision text image correction (unwarping) model |
Seal Text Detection Module:

| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_seal_det | Inference Model / Training Model | 98.40 | 74.75 / 67.72 | 382.55 / 382.55 | 109 | PP-OCRv4 server-side seal text detection model with higher accuracy, suitable for deployment on servers with more compute |
| PP-OCRv4_mobile_seal_det | Inference Model / Training Model | 96.36 | 7.82 / 3.09 | 48.28 / 23.97 | 4.6 | PP-OCRv4 mobile-side seal text detection model with higher efficiency, suitable for edge deployment |
Text Recognition Module:

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv5_server_rec | Inference Model / Training Model | 86.38 | 8.45 / 2.36 | 122.69 / 122.69 | 81 | PP-OCRv5_rec is a new-generation text recognition model. It aims to efficiently and accurately support four major languages (Simplified Chinese, Traditional Chinese, English, and Japanese) as well as complex text scenes such as handwriting, vertical text, pinyin, and rare characters with a single model, balancing recognition accuracy, inference speed, and model robustness to provide efficient and accurate support for document understanding in various scenarios. |
| PP-OCRv5_mobile_rec | Inference Model / Training Model | 81.29 | 1.46 / 5.43 | 5.32 / 91.79 | 16 | |
| PP-OCRv4_server_rec_doc | Inference Model / Training Model | 86.58 | 6.65 / 2.38 | 32.92 / 32.92 | 181 | PP-OCRv4_server_rec_doc is trained on a mix of additional Chinese document data and PP-OCR training data, based on PP-OCRv4_server_rec. It improves recognition of some Traditional Chinese characters, Japanese, and special characters, supporting over 15,000 characters. Besides document-related text, it also improves general text recognition. |
| PP-OCRv4_mobile_rec | Inference Model / Training Model | 83.28 | 4.82 / 1.20 | 16.74 / 4.64 | 88 | PP-OCRv4 lightweight recognition model with high inference efficiency; can be deployed on many hardware devices, including edge devices |
| PP-OCRv4_server_rec | Inference Model / Training Model | 85.19 | 6.58 / 2.43 | 33.17 / 33.17 | 151 | PP-OCRv4 server-side model with high inference accuracy; can be deployed on various servers |
| en_PP-OCRv4_mobile_rec | Inference Model / Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 66 | An ultra-lightweight English recognition model trained on the PP-OCRv4 recognition model, supporting English and digit recognition |
> ❗ Listed above are the 6 core models that are the focus of the text recognition module, which supports a total of 20 full models, including multiple multi-language text recognition models, with the complete model list as follows:
👉 Details of the Model List

* PP-OCRv5 Multi-Scene Model

| Model | Model Download Link | Chinese Recognition Avg Accuracy (%) | English Recognition Avg Accuracy (%) | Traditional Chinese Recognition Avg Accuracy (%) | Japanese Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|---|---|---|
| PP-OCRv5_server_rec | Inference Model / Training Model | 86.38 | 64.70 | 93.29 | 60.35 | 8.45 / 2.36 | 122.69 / 122.69 | 81 | PP-OCRv5_rec is a new-generation text recognition model. It aims to efficiently and accurately support four major languages (Simplified Chinese, Traditional Chinese, English, and Japanese) as well as complex text scenes such as handwriting, vertical text, pinyin, and rare characters with a single model, balancing recognition accuracy, inference speed, and model robustness to provide efficient and accurate support for document understanding in various scenarios. |
| PP-OCRv5_mobile_rec | Inference Model / Training Model | 81.29 | 66.00 | 83.55 | 54.65 | 1.46 / 5.43 | 5.32 / 91.79 | 16 | |
* Chinese Recognition Model

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_rec_doc | Inference Model / Training Model | 86.58 | 6.65 / 2.38 | 32.92 / 32.92 | 91 | PP-OCRv4_server_rec_doc is trained on a mix of additional Chinese document data and PP-OCR training data, based on PP-OCRv4_server_rec. It improves recognition of some Traditional Chinese characters, Japanese, and special characters, supporting over 15,000 characters. Besides document-related text, it also improves general text recognition. |
| PP-OCRv4_mobile_rec | Inference Model / Training Model | 83.28 | 4.82 / 1.20 | 16.74 / 4.64 | 11 | PP-OCRv4 lightweight recognition model with high inference efficiency; can be deployed on many hardware devices, including edge devices |
| PP-OCRv4_server_rec | Inference Model / Training Model | 85.19 | 6.58 / 2.43 | 33.17 / 33.17 | 87 | PP-OCRv4 server-side model with high inference accuracy; can be deployed on various servers |
| PP-OCRv3_mobile_rec | Inference Model / Training Model | 75.43 | 5.87 / 1.19 | 9.07 / 4.28 | 11 | PP-OCRv3 lightweight recognition model with high inference efficiency; can be deployed on many hardware devices, including edge devices |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| ch_SVTRv2_rec | Inference Model / Training Model | 68.81 | 8.08 / 2.74 | 50.17 / 42.50 | 73.9 | SVTRv2 is a server-side text recognition model developed by the OpenOCR team from Fudan University's Vision and Learning Lab (FVL). It won first place in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition, improving end-to-end recognition accuracy on the A leaderboard by 6% compared to PP-OCRv4. |
| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| ch_RepSVTR_rec | Inference Model / Training Model | 65.07 | 5.93 / 1.62 | 20.73 / 7.32 | 22.1 | RepSVTR is a mobile-side text recognition model based on SVTRv2. It won first place in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition, improving end-to-end recognition accuracy on the B leaderboard by 2.5% compared to PP-OCRv4, with comparable inference speed. |
* English Recognition Model

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| en_PP-OCRv4_mobile_rec | Inference Model / Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 6.8 | An ultra-lightweight English recognition model trained on the PP-OCRv4 recognition model, supporting English and digit recognition |
| en_PP-OCRv3_mobile_rec | Inference Model / Training Model | 70.69 | 5.44 / 0.75 | 8.65 / 5.57 | 7.8 | An ultra-lightweight English recognition model trained on the PP-OCRv3 recognition model, supporting English and digit recognition |
* Multilingual Recognition Model

| Model | Model Download Link | Recognition Avg Accuracy (%) | GPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | CPU Inference Time (ms)<br>[Regular Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| korean_PP-OCRv3_mobile_rec | Inference Model / Training Model | 60.21 | 5.40 / 0.97 | 9.11 / 4.05 | 8.6 | An ultra-lightweight Korean recognition model trained on the PP-OCRv3 recognition model, supporting Korean and digit recognition |
| japan_PP-OCRv3_mobile_rec | Inference Model / Training Model | 45.69 | 5.70 / 1.02 | 8.48 / 4.07 | 8.8 | An ultra-lightweight Japanese recognition model trained on the PP-OCRv3 recognition model, supporting Japanese and digit recognition |
| chinese_cht_PP-OCRv3_mobile_rec | Inference Model / Training Model | 82.06 | 5.90 / 1.28 | 9.28 / 4.34 | 9.7 | An ultra-lightweight Traditional Chinese recognition model trained on the PP-OCRv3 recognition model, supporting Traditional Chinese and digit recognition |
| te_PP-OCRv3_mobile_rec | Inference Model / Training Model | 95.88 | 5.42 / 0.82 | 8.10 / 6.91 | 7.8 | An ultra-lightweight Telugu recognition model trained on the PP-OCRv3 recognition model, supporting Telugu and digit recognition |
| ka_PP-OCRv3_mobile_rec | Inference Model / Training Model | 96.96 | 5.25 / 0.79 | 9.09 / 3.86 | 8.0 | An ultra-lightweight Kannada recognition model trained on the PP-OCRv3 recognition model, supporting Kannada and digit recognition |
| ta_PP-OCRv3_mobile_rec | Inference Model / Training Model | 76.83 | 5.23 / 0.75 | 10.13 / 4.30 | 8.0 | An ultra-lightweight Tamil recognition model trained on the PP-OCRv3 recognition model, supporting Tamil and digit recognition |
| latin_PP-OCRv3_mobile_rec | Inference Model / Training Model | 76.93 | 5.20 / 0.79 | 8.83 / 7.15 | 7.8 | An ultra-lightweight Latin-script recognition model trained on the PP-OCRv3 recognition model, supporting Latin script and digit recognition |
| arabic_PP-OCRv3_mobile_rec | Inference Model / Training Model | 73.55 | 5.35 / 0.79 | 8.80 / 4.56 | 7.8 | An ultra-lightweight Arabic-letter recognition model trained on the PP-OCRv3 recognition model, supporting Arabic letters and digit recognition |
| cyrillic_PP-OCRv3_mobile_rec | Inference Model / Training Model | 94.28 | 5.23 / 0.76 | 8.89 / 3.88 | 7.9 | An ultra-lightweight Cyrillic-letter recognition model trained on the PP-OCRv3 recognition model, supporting Cyrillic letters and digit recognition |
| devanagari_PP-OCRv3_mobile_rec | Inference Model / Training Model | 96.44 | 5.22 / 0.79 | 8.56 / 4.06 | 7.9 | An ultra-lightweight Devanagari-letter recognition model trained on the PP-OCRv3 recognition model, supporting Devanagari letters and digit recognition |
Test Environment Description:
  • Performance Test Environment
    • Test Dataset:
      • Document Image Orientation Classification Model: Self-built internal dataset covering multiple scenarios such as documents and certificates, containing 1000 images.
      • Text Image Correction Model: DocUNet.
      • Layout Region Detection Model: PaddleOCR self-built layout region detection dataset, containing 500 common document type images such as Chinese and English papers, magazines, contracts, books, exam papers, and research reports.
      • 3-Class Layout Detection Model: PaddleOCR self-built layout region detection dataset, containing 1154 common document type images such as Chinese and English papers, magazines, and research reports.
      • 17-Class Region Detection Model: PaddleOCR self-built layout region detection dataset, containing 892 common document type images such as Chinese and English papers, magazines, and research reports.
      • Text Detection Model: PaddleOCR self-built Chinese dataset covering multiple scenarios such as street scenes, web images, documents, and handwriting, with 500 images for detection.
      • Chinese Recognition Model: PaddleOCR self-built Chinese dataset covering multiple scenarios such as street scenes, web images, documents, and handwriting, with 11,000 images for text recognition.
      • ch_SVTRv2_rec: PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition A leaderboard evaluation set.
      • ch_RepSVTR_rec: PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition B leaderboard evaluation set.
      • English Recognition Model: Self-built internal English dataset.
      • Multilingual Recognition Model: Self-built internal multilingual dataset.
      • Text Line Orientation Classification Model: Self-built internal dataset covering multiple scenarios such as documents and certificates, containing 1000 images.
      • Seal Text Detection Model: Self-built internal dataset containing 500 circular seal images.
    • Hardware Configuration:
      • GPU: NVIDIA Tesla T4
      • CPU: Intel Xeon Gold 6271C @ 2.60GHz
      • Other Environment: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
  • Inference Mode Description
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Regular Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of precision type and acceleration strategy selected in advance | FP32 precision / 8 threads | Optimal backend selected in advance (Paddle/OpenVINO/TRT, etc.) |


If you prioritize model accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference; if you prioritize storage footprint, choose a model with a smaller storage size.

2. Quick Start

Before using the seal text recognition pipeline locally, ensure that you have installed the wheel package according to the installation tutorial. Once installed, you can experience the pipeline locally via the command line or integrate it into your code via the Python API.

2.1 Command Line Experience

You can quickly experience the effect of the seal_recognition pipeline with a single command:

paddleocr seal_recognition -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/seal_text_det.png \
    --use_doc_orientation_classify False \
    --use_doc_unwarping False

# Use --device to specify the use of GPU for model inference.
paddleocr seal_recognition -i ./seal_text_det.png --device gpu
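Other documented flags (see the parameter table below) can be combined in a single invocation. A minimal sketch, assuming the demo image has been downloaded locally and a GPU is available; the threshold values are illustrative only:

paddleocr seal_recognition -i ./seal_text_det.png \
    --device gpu:0 \
    --save_path ./output \
    --layout_threshold 0.6 \
    --seal_rec_score_thresh 0.5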
The command line supports additional parameter settings. The command-line parameters are described in detail below.

| Parameter | Description | Type | Default Value |
|---|---|---|---|
| input | Data to be predicted, required. Local path of an image or PDF file, e.g., /root/data/img.jpg; a URL of an image or PDF file, such as the demo image URL used above; or a local directory containing the images to be predicted, e.g., /root/data/ (prediction of PDF files inside directories is currently not supported; PDF files must be specified with an exact file path). | str | |
| save_path | Path to save the inference results. If not set, the inference results will not be saved locally. | str | |
| doc_orientation_classify_model_name | Name of the document orientation classification model. If not set, the pipeline's default model is used. | str | |
| doc_orientation_classify_model_dir | Directory path of the document orientation classification model. If not set, the official model will be downloaded. | str | |
| doc_unwarping_model_name | Name of the text image unwarping model. If not set, the pipeline's default model is used. | str | |
| doc_unwarping_model_dir | Directory path of the text image unwarping model. If not set, the official model will be downloaded. | str | |
| layout_detection_model_name | Name of the layout detection model. If not set, the pipeline's default model is used. | str | |
| layout_detection_model_dir | Directory path of the layout detection model. If not set, the official model will be downloaded. | str | |
| seal_text_detection_model_name | Name of the seal text detection model. If not set, the pipeline's default model is used. | str | |
| seal_text_detection_model_dir | Directory path of the seal text detection model. If not set, the official model will be downloaded. | str | |
| text_recognition_model_name | Name of the text recognition model. If not set, the pipeline's default model is used. | str | |
| text_recognition_model_dir | Directory path of the text recognition model. If not set, the official model will be downloaded. | str | |
| text_recognition_batch_size | Batch size for the text recognition model. If not set, defaults to 1. | int | |
| use_doc_orientation_classify | Whether to load and use the document orientation classification module. If not set, defaults to the pipeline initialization value (True). | bool | |
| use_doc_unwarping | Whether to load and use the text image correction module. If not set, defaults to the pipeline initialization value (True). | bool | |
| use_layout_detection | Whether to load and use the layout detection module. If not set, defaults to the pipeline initialization value (True). | bool | |
| layout_threshold | Score threshold for the layout model; any value between 0 and 1. If not set, the default of 0.5 is used. | float | |
| layout_nms | Whether to use Non-Maximum Suppression (NMS) as post-processing for layout detection. If not set, defaults to the pipeline initialization value (True). | bool | |
| layout_unclip_ratio | Unclip (expansion) ratio for detected boxes in the layout detection model; any float > 0. If not set, the default of 1.0 is used. | float | |
| layout_merge_bboxes_mode | Merging mode for the detection boxes output by the layout detection model:<br>• large: only the largest outer box is retained for overlapping boxes, and inner overlapping boxes are removed;<br>• small: only the smallest inner boxes are retained for overlapping boxes, and outer overlapping boxes are removed;<br>• union: no filtering is performed, and both inner and outer boxes are retained.<br>If not set, the default is large. | str | |
| seal_det_limit_side_len | Image side length limit for seal text detection; any integer > 0. If not set, the default of 736 is used. | int | |
| seal_det_limit_type | Limit type for the image side length in seal text detection. Supports min and max: min ensures the shortest side is no less than seal_det_limit_side_len, max ensures the longest side is no greater than seal_det_limit_side_len. If not set, the default is min. | str | |
| seal_det_thresh | Pixel threshold; pixels with scores above this value in the probability map are considered text pixels. Any float > 0. If not set, the default of 0.2 is used. | float | |
| seal_det_box_thresh | Box threshold; boxes whose average pixel score exceeds this value are considered text regions. Any float > 0. If not set, the default of 0.6 is used. | float | |
| seal_det_unclip_ratio | Expansion ratio for seal text detection; a higher value means a larger expansion area. Any float > 0. If not set, the default of 0.5 is used. | float | |
| seal_rec_score_thresh | Recognition score threshold; text results above this value are kept. Any float > 0. If not set, the default of 0.0 is used (no threshold). | float | |
| device | Device used for inference. Supports specifying a device ID:<br>• CPU: e.g., cpu means using the CPU for inference;<br>• GPU: e.g., gpu:0 means using the first GPU;<br>• NPU: e.g., npu:0 means using the first NPU;<br>• XPU: e.g., xpu:0 means using the first XPU;<br>• MLU: e.g., mlu:0 means using the first MLU;<br>• DCU: e.g., dcu:0 means using the first DCU.<br>If not set, the pipeline initialization value is used; during initialization, the local GPU device 0 is preferred, falling back to the CPU if unavailable. | str | |
| enable_hpi | Whether to enable high-performance inference. | bool | False |
| use_tensorrt | Whether to use the Paddle Inference TensorRT subgraph engine.<br>For Paddle with CUDA 11.8, the compatible TensorRT version is 8.x (x>=6); TensorRT 8.6.1.6 is recommended.<br>For Paddle with CUDA 12.6, the compatible TensorRT version is 10.x (x>=5); TensorRT 10.5.0.18 is recommended. | bool | False |
| min_subgraph_size | Minimum subgraph size, used to optimize the computation of model subgraphs. | int | 3 |
| precision | Computation precision, e.g., fp32, fp16. | str | fp32 |
| enable_mkldnn | Whether to enable MKL-DNN acceleration for inference. If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set. | bool | True |
| cpu_threads | Number of threads used for inference on the CPU. | int | 8 |
| paddlex_config | Path to the PaddleX pipeline configuration file. | str | |


After running, the results will be printed to the terminal, as follows:

{'res': {'input_path': './seal_text_det.png', 'model_settings': {'use_doc_preprocessor': True, 'use_layout_detection': True}, 'doc_preprocessor_res': {'input_path': None, 'page_index': None, 'model_settings': {'use_doc_orientation_classify': False, 'use_doc_unwarping': False}, 'angle': -1}, 'layout_det_res': {'input_path': None, 'page_index': None, 'boxes': [{'cls_id': 16, 'label': 'seal', 'score': 0.975529670715332, 'coordinate': [6.191284, 0.16680908, 634.39325, 628.85345]}]}, 'seal_res_list': [{'input_path': None, 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_textline_orientation': False}, 'dt_polys': [array([[320,  38],
       ...,
       [315,  38]]), array([[461, 347],
       ...,
       [456, 346]]), array([[439, 445],
       ...,
       [434, 444]]), array([[158, 468],
       ...,
       [154, 466]])], 'text_det_params': {'limit_side_len': 736, 'limit_type': 'min', 'thresh': 0.2, 'max_side_limit': 4000, 'box_thresh': 0.6, 'unclip_ratio': 0.5}, 'text_type': 'seal', 'textline_orientation_angles': array([-1, ..., -1]), 'text_rec_score_thresh': 0, 'rec_texts': ['天津君和缘商贸有限公司', '发票专用章', '吗繁物', '5263647368706'], 'rec_scores': array([0.99340463, ..., 0.9916274 ]), 'rec_polys': [array([[320,  38],
       ...,
       [315,  38]]), array([[461, 347],
       ...,
       [456, 346]]), array([[439, 445],
       ...,
       [434, 444]]), array([[158, 468],
       ...,
       [154, 466]])], 'rec_boxes': array([], dtype=float64)}]}}
The visualized results are saved under save_path, including a rendered image of the seal OCR result.

2.2 Python Script Integration

  • The above command line is for quickly experiencing and viewing the effect. In a project, you will generally integrate via code; quick pipeline inference takes just a few lines, as follows:
from paddleocr import SealRecognition

pipeline = SealRecognition(
    use_doc_orientation_classify=False, # Set whether to use document orientation classification model
    use_doc_unwarping=False, # Set whether to use document image unwarping module
)
# pipeline = SealRecognition(device="gpu")  # Specify GPU for model inference
output = pipeline.predict("./seal_text_det.png")
for res in output:
    res.print()  # Print structured prediction results
    res.save_to_img("./output/")  # Save visualization images
    res.save_to_json("./output/")  # Save structured results as JSON

In the above Python script, the following steps were executed:

(1) Instantiate a pipeline object for seal text recognition using the SealRecognition() class, with specific parameter descriptions as follows:

| Parameter | Description | Type | Default Value |
|---|---|---|---|
| doc_orientation_classify_model_name | Name of the document orientation classification model. If set to None, the pipeline default model is used. | str | None |
| doc_orientation_classify_model_dir | Directory path of the document orientation classification model. If set to None, the official model will be downloaded. | str | None |
| doc_unwarping_model_name | Name of the document unwarping model. If set to None, the pipeline default model is used. | str | None |
| doc_unwarping_model_dir | Directory path of the document unwarping model. If set to None, the official model will be downloaded. | str | None |
| layout_detection_model_name | Name of the layout detection model. If set to None, the pipeline default model is used. | str | None |
| layout_detection_model_dir | Directory path of the layout detection model. If set to None, the official model will be downloaded. | str | None |
| seal_text_detection_model_name | Name of the seal text detection model. If set to None, the pipeline default model is used. | str | None |
| seal_text_detection_model_dir | Directory path of the seal text detection model. If set to None, the official model will be downloaded. | str | None |
| text_recognition_model_name | Name of the text recognition model. If set to None, the pipeline default model is used. | str | None |
| text_recognition_model_dir | Directory path of the text recognition model. If set to None, the official model will be downloaded. | str | None |
| text_recognition_batch_size | Batch size for the text recognition model. If set to None, the default batch size is 1. | int | None |
| use_doc_orientation_classify | Whether to enable the document orientation classification module. If set to None, the default value is True. | bool | None |
| use_doc_unwarping | Whether to enable the document image unwarping module. If set to None, the default value is True. | bool | None |
| use_layout_detection | Whether to load and use the layout detection module. If set to None, defaults to the pipeline initialization value (True). | bool | None |
| layout_threshold | Score threshold for the layout model:<br>• float: any float between 0 and 1;<br>• dict: e.g., {0: 0.1}, where the key is the class ID and the value is the threshold for that class;<br>• None: the pipeline default of 0.5 is used. | float \| dict | None |
| layout_nms | Whether to use Non-Maximum Suppression (NMS) as post-processing for layout detection. If set to None, defaults to the pipeline initialization value (True). | bool | None |
| layout_unclip_ratio | Expansion ratio for bounding boxes from the layout detection model:<br>• float: any float greater than 0;<br>• Tuple[float, float]: expansion ratios in the horizontal and vertical directions;<br>• dict: int keys representing cls_id with tuple values, e.g., {0: (1.1, 2.0)} means boxes of class 0 are expanded 1.1× in width and 2.0× in height;<br>• None: the pipeline default of 1.0 is used. | float \| Tuple[float, float] \| dict | None |
| layout_merge_bboxes_mode | Filtering method for overlapping boxes in layout detection:<br>• str: large, small, or union, retaining the larger box, the smaller box, or both;<br>• dict: int keys representing cls_id with str values, e.g., {0: "large", 2: "small"}, to use different modes for different classes;<br>• None: the pipeline default of large is used. | str \| dict | None |
| seal_det_limit_side_len | Image side length limit for seal text detection:<br>• int: any integer greater than 0;<br>• None: the default of 736 is used. | int | None |
| seal_det_limit_type | Limit type for the seal text detection image side length:<br>• str: min or max; min ensures the shortest side is no less than seal_det_limit_side_len, max ensures the longest side is no greater than seal_det_limit_side_len;<br>• None: the default of min is used. | str | None |
| seal_det_thresh | Pixel threshold; pixels with scores greater than this value in the probability map are considered text pixels:<br>• float: any float greater than 0;<br>• None: the default of 0.2 is used. | float | None |
| seal_det_box_thresh | Bounding box threshold; a detection box is considered a text region if the average score of the pixels inside it exceeds this value:<br>• float: any float greater than 0;<br>• None: the default of 0.6 is used. | float | None |
| seal_det_unclip_ratio | Expansion ratio for seal text detection; the larger the value, the larger the expanded area:<br>• float: any float greater than 0;<br>• None: the default of 0.5 is used. | float | None |
| seal_rec_score_thresh | Score threshold for seal text recognition; text results with scores above this threshold are retained:<br>• float: any float greater than 0;<br>• None: the default of 0.0 is used (no threshold). | float | None |
| device | Device used for inference. Supports specifying a device ID:<br>• CPU: e.g., cpu means using the CPU for inference;<br>• GPU: e.g., gpu:0 means using GPU 0;<br>• NPU: e.g., npu:0 means using NPU 0;<br>• XPU: e.g., xpu:0 means using XPU 0;<br>• MLU: e.g., mlu:0 means using MLU 0;<br>• DCU: e.g., dcu:0 means using DCU 0;<br>• None: the pipeline initialization value is used; during initialization, the local GPU device 0 is preferred, falling back to the CPU if unavailable. | str | None |
| enable_hpi | Whether to enable high-performance inference. | bool | False |
| use_tensorrt | Whether to use the Paddle Inference TensorRT subgraph engine.<br>For Paddle with CUDA 11.8, the compatible TensorRT version is 8.x (x>=6); TensorRT 8.6.1.6 is recommended.<br>For Paddle with CUDA 12.6, the compatible TensorRT version is 10.x (x>=5); TensorRT 10.5.0.18 is recommended. | bool | False |
| min_subgraph_size | Minimum subgraph size, used to optimize model subgraph computation. | int | 3 |
| precision | Computation precision, e.g., fp32, fp16. | str | "fp32" |
| enable_mkldnn | Whether to enable MKL-DNN acceleration for inference. If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set. | bool | True |
| cpu_threads | Number of threads used for inference on the CPU. | int | 8 |
| paddlex_config | Path to the PaddleX pipeline configuration file. | str | None |
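To illustrate the dictionary-valued forms of the layout parameters above, here is a minimal sketch; the class ID 16 is the seal class from the sample output in Section 2.1, and the threshold values are illustrative assumptions, not recommendations:

from paddleocr import SealRecognition

pipeline = SealRecognition(
    use_doc_orientation_classify=False,
    use_doc_unwarping=False,
    layout_threshold={16: 0.3},            # per-class score threshold: lower it for the seal class only
    layout_unclip_ratio={16: (1.1, 1.1)},  # expand seal boxes 1.1x in width and height
    layout_merge_bboxes_mode="large",      # keep only outer boxes when layout boxes overlap
)
output = pipeline.predict("./seal_text_det.png")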

(2) Call the predict() method of the seal text recognition pipeline object to perform inference. This method returns a generator. The parameters of the predict() method and their descriptions are as follows:

| Parameter | Description | Type | Default Value |
|---|---|---|---|
| input | Input data to be predicted, required. Supports multiple types:<br>• Python Var: image data represented as numpy.ndarray;<br>• str: local path of an image or PDF file, e.g., /root/data/img.jpg; a URL of an image or PDF file; or a local directory containing images to be predicted, e.g., /root/data/ (prediction of PDF files inside directories is currently not supported; PDF files must be specified with an exact file path);<br>• List: elements must be of the above types, e.g., [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"]. | Python Var \| str \| list | |
| use_doc_orientation_classify | Whether to use the document orientation classification module during inference. | bool | None |
| use_doc_unwarping | Whether to use the text image correction module during inference. | bool | None |
| use_layout_detection | Whether to use the layout detection module during inference. | bool | None |
| layout_threshold | Same as the parameter during instantiation. | float \| dict | None |
| layout_nms | Same as the parameter during instantiation. | bool | None |
| layout_unclip_ratio | Same as the parameter during instantiation. | float \| Tuple[float, float] \| dict | None |
| layout_merge_bboxes_mode | Same as the parameter during instantiation. | str \| dict | None |
| seal_det_limit_side_len | Same as the parameter during instantiation. | int | None |
| seal_det_limit_type | Same as the parameter during instantiation. | str | None |
| seal_det_thresh | Same as the parameter during instantiation. | float | None |
| seal_det_box_thresh | Same as the parameter during instantiation. | float | None |
| seal_det_unclip_ratio | Same as the parameter during instantiation. | float | None |
| seal_rec_score_thresh | Same as the parameter during instantiation. | float | None |
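Because these parameters mirror the instantiation parameters, they can override the pipeline defaults for a single call. A minimal sketch, assuming a pipeline object created as in Section 2.2; the values are illustrative:

# Per-call overrides apply only to this predict() invocation
output = pipeline.predict(
    "./seal_text_det.png",
    seal_det_box_thresh=0.6,    # stricter box filtering for this image
    seal_rec_score_thresh=0.5,  # keep only recognition results scoring above 0.5
)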

(3) Process the prediction results. The prediction result for each sample is of dict type and supports operations such as printing, saving as an image, and saving as a json file:

| Method | Description | Parameter | Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| print() | Print results to the terminal | format_json | bool | Whether to format the output content with JSON indentation. | True |
| | | indent | int | Indentation level to beautify the JSON output for better readability; effective only when format_json is True. | 4 |
| | | ensure_ascii | bool | Whether to escape non-ASCII characters to Unicode. When True, all non-ASCII characters are escaped; when False, the original characters are retained. Effective only when format_json is True. | False |
| save_to_json() | Save results as a JSON file | save_path | str | File path for saving. When it is a directory, the saved file name matches the input file name. | None |
| | | indent | int | Indentation level to beautify the JSON output for better readability; effective only when format_json is True. | 4 |
| | | ensure_ascii | bool | Whether to escape non-ASCII characters to Unicode; effective only when format_json is True. | False |
| save_to_img() | Save results as an image file | save_path | str | File path for saving; supports a directory or a file path. | None |
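For example, a hypothetical call combining these arguments, assuming output comes from predict() as in Section 2.2:

for res in output:
    res.print(format_json=True, indent=2, ensure_ascii=False)  # readable JSON; non-ASCII (e.g., Chinese) kept as-is
    res.save_to_json(save_path="./output/")  # saved as ./output/{input_basename}_res.json
    res.save_to_img(save_path="./output/")   # one visualization image per result region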
  • Calling the print() method will print the results to the terminal, and the explanations of the printed content are as follows:

    • input_path: (str) The input path of the image to be predicted.

    • model_settings: (Dict[str, bool]) The model parameters required for pipeline configuration.

      • use_doc_preprocessor: (bool) Controls whether to enable the document preprocessing sub-pipeline.
      • use_layout_detection: (bool) Controls whether to enable the layout detection sub-module.
    • layout_det_res: (Dict[str, Union[List[numpy.ndarray], List[float]]]) The output result of the layout detection sub-module. Only exists when use_layout_detection=True.

      • input_path: (Union[str, None]) The image path accepted by the layout detection module. Saved as None when the input is a numpy.ndarray.
      • page_index: (Union[int, None]) Indicates the current page number of the PDF if the input is a PDF file; otherwise, it is None.
      • boxes: (List[Dict]) A list of detected layout seal regions, with each element containing the following fields:
        • cls_id: (int) The class ID of the detected seal region.
        • score: (float) The confidence score of the detected region.
        • coordinate: (List[float]) The coordinates of the detection box, [x1, y1, x2, y2], where (x1, y1) is the top-left corner and (x2, y2) is the bottom-right corner.
    • seal_res_list: List[Dict] A list of seal text recognition results, with each element containing the following fields:

      • input_path: (Union[str, None]) The image path accepted by the seal text recognition pipeline. Saved as None when the input is a numpy.ndarray.
      • page_index: (Union[int, None]) Indicates the current page number of the PDF if the input is a PDF file; otherwise, it is None.
      • model_settings: (Dict[str, bool]) The model configuration parameters for the seal text recognition pipeline.
      • use_doc_preprocessor: (bool) Controls whether to enable the document preprocessing sub-pipeline.
      • use_textline_orientation: (bool) Controls whether to enable the text line orientation classification sub-module.
    • doc_preprocessor_res: (Dict[str, Union[str, Dict[str, bool], int]]) The output result of the document preprocessing sub-pipeline. Only exists when use_doc_preprocessor=True.

      • input_path: (Union[str, None]) The image path accepted by the document preprocessing sub-pipeline. Saved as None when the input is a numpy.ndarray.
      • model_settings: (Dict) The model configuration parameters for the preprocessing sub-pipeline.
        • use_doc_orientation_classify: (bool) Controls whether to enable document orientation classification.
        • use_doc_unwarping: (bool) Controls whether to enable document unwarping.
      • angle: (int) The predicted result of document orientation classification. When enabled, it takes values [0, 1, 2, 3], corresponding to [0°, 90°, 180°, 270°]; when disabled, it is -1.
    • dt_polys: (List[numpy.ndarray]) A list of polygon boxes for seal text detection. Each detection box is represented by a numpy array of multiple vertex coordinates, with the array shape being (n, 2).

    • dt_scores: (List[float]) A list of confidence scores for text detection boxes.

    • text_det_params: (Dict[str, Union[int, float, str]]) Configuration parameters for the text detection module.

      • limit_side_len: (int) The side length limit value during image preprocessing.
      • limit_type: (str) The handling method for side length limits.
      • thresh: (float) The confidence threshold for text pixel classification.
      • box_thresh: (float) The confidence threshold for text detection boxes.
      • unclip_ratio: (float) The expansion ratio for text detection boxes.
    • text_type: (str) The type of seal text detection, currently fixed as "seal".
    • text_rec_score_thresh: (float) The filtering threshold for text recognition results.

    • rec_texts: (List[str]) A list of text recognition results, containing only texts with confidence scores above text_rec_score_thresh.

    • rec_scores: (List[float]) A list of confidence scores for text recognition, filtered by text_rec_score_thresh.

    • rec_polys: (List[numpy.ndarray]) A list of text detection boxes filtered by confidence score, in the same format as dt_polys.

    • rec_boxes: (numpy.ndarray) An array of rectangular bounding boxes for detection boxes; the seal recognition pipeline returns an empty array.

  • Calling the save_to_json() method will save the above content to the specified save_path. If a directory is specified, the saved path will be save_path/{your_img_basename}_res.json. If a file is specified, it will be saved directly to that file. Since JSON files do not support saving numpy arrays, numpy.array types will be converted to list format.

  • Calling the save_to_img() method will save the visualization results to the specified save_path. If a directory is specified, the saved path will be save_path/{your_img_basename}_seal_res_region1.{your_img_extension}. If a file is specified, it will be saved directly to that file. (The pipeline usually contains multiple result images, so it is not recommended to specify a specific file path directly, as multiple images will be overwritten, and only the last image will be retained.)

  • Additionally, you can obtain visualized images with results and prediction results through attributes, as follows:

| Attribute | Description |
|---|---|
| json | Get the prediction results in JSON format. |
| img | Get the visualization results in dict format. |
  • The prediction results obtained through the json attribute are of dict type, with content consistent with what is saved by calling the save_to_json() method.
  • The prediction results returned by the img attribute are of dict type. The keys are layout_det_res, seal_res_region1, and preprocessed_img, corresponding to three Image.Image objects: one for visualizing layout detection, one for visualizing seal text recognition results, and one for visualizing image preprocessing. If the image preprocessing sub-module is not used, preprocessed_img will not be included in the dictionary. If the layout region detection module is not used, layout_det_res will not be included.
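A minimal sketch of consuming these attributes directly, assuming output comes from predict() as in Section 2.2; which keys appear in res.img depends on the modules enabled:

for res in output:
    data = res.json  # dict, same content as save_to_json()
    for seal in data["res"]["seal_res_list"]:
        print(seal["rec_texts"])  # recognized seal text strings
    for name, image in res.img.items():  # name -> PIL Image.Image object
        image.save(f"./output/{name}.png")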

3. Development Integration/Deployment

If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.

If you need to integrate the pipeline into your Python project, you can refer to the example code in 2.2 Python Script Method.

In addition, PaddleOCR also provides two other deployment methods, detailed as follows:

🚀 High-Performance Inference: In real-world production environments, many applications have stringent performance requirements for deployment strategies, especially in terms of response speed, to ensure efficient system operation and a smooth user experience. To address this, PaddleOCR offers high-performance inference capabilities aimed at deeply optimizing the performance of model inference and pre/post-processing, thereby significantly accelerating the end-to-end process. For detailed high-performance inference procedures, please refer to High-Performance Inference.

☁️ Service Deployment: Service deployment is a common form of deployment in real-world production environments. By encapsulating inference functionality into a service, clients can access these services via network requests to obtain inference results. For detailed production service deployment procedures, please refer to Serving.

Below are the API references for basic serving deployment and multi-language service invocation examples:

API Reference

For the main operations provided by the service:

  • The HTTP request method is POST.
  • The request body and response body are both JSON data (JSON objects).
  • When the request is processed successfully, the response status code is 200, and the attributes of the response body are as follows:
| Name | Type | Description |
|---|---|---|
| logId | string | The UUID of the request. |
| errorCode | integer | Error code. Fixed at 0. |
| errorMsg | string | Error message. Fixed at "Success". |
| result | object | The result of the operation. |
  • When the request is not processed successfully, the attributes of the response body are as follows:
| Name | Type | Description |
|---|---|---|
| logId | string | The UUID of the request. |
| errorCode | integer | Error code. Same as the response status code. |
| errorMsg | string | Error message. |

The main operations provided by the service are as follows:

  • infer

Obtain the seal text recognition result.

POST /seal-recognition

  • The attributes of the request body are as follows:
| Name | Type | Description | Required |
|---|---|---|---|
| file | string | The URL of an image or PDF file accessible to the server, or the Base64-encoded content of such a file. By default, for PDF files exceeding 10 pages, only the first 10 pages are processed. To remove the page limit, set Serving.extra.max_num_input_imgs to null in the pipeline configuration file. | Yes |
| fileType | integer \| null | The file type: 0 for a PDF file, 1 for an image file. If this attribute is absent, the file type is inferred from the URL. | No |
| useDocOrientationClassify | boolean \| null | See the use_doc_orientation_classify parameter of the pipeline object's predict method. | No |
| useDocUnwarping | boolean \| null | See the use_doc_unwarping parameter of the pipeline object's predict method. | No |
| useLayoutDetection | boolean \| null | See the use_layout_detection parameter of the pipeline object's predict method. | No |
| layoutThreshold | number \| null | See the layout_threshold parameter of the pipeline object's predict method. | No |
| layoutNms | boolean \| null | See the layout_nms parameter of the pipeline object's predict method. | No |
| layoutUnclipRatio | number \| array \| null | See the layout_unclip_ratio parameter of the pipeline object's predict method. | No |
| layoutMergeBboxesMode | string \| null | See the layout_merge_bboxes_mode parameter of the pipeline object's predict method. | No |
| sealDetLimitSideLen | integer \| null | See the seal_det_limit_side_len parameter of the pipeline object's predict method. | No |
| sealDetLimitType | string \| null | See the seal_det_limit_type parameter of the pipeline object's predict method. | No |
| sealDetThresh | number \| null | See the seal_det_thresh parameter of the pipeline object's predict method. | No |
| sealDetBoxThresh | number \| null | See the seal_det_box_thresh parameter of the pipeline object's predict method. | No |
| sealDetUnclipRatio | number \| null | See the seal_det_unclip_ratio parameter of the pipeline object's predict method. | No |
| sealRecScoreThresh | number \| null | See the seal_rec_score_thresh parameter of the pipeline object's predict method. | No |
  • When the request is processed successfully, the result in the response body has the following properties:
| Name | Type | Meaning |
|---|---|---|
| sealRecResults | array | The seal text recognition results. The array length is 1 for image input, or the number of document pages actually processed for PDF input. For PDF input, each element represents the result of one processed page. |
| dataInfo | object | Information about the input data. |

Each element in sealRecResults is an object with the following properties:

| Name | Type | Meaning |
|---|---|---|
| prunedResult | object | A simplified version of the res field in the JSON output of the pipeline object's predict method, with the input_path and page_index fields removed. |
| outputImages | object \| null | See the img attribute of the pipeline prediction result. The images are in JPEG format and Base64-encoded. |
| inputImage | string \| null | The input image, in JPEG format and Base64-encoded. |
Multi-language Service Invocation Example
Python
import base64
import requests

API_URL = "http://localhost:8080/seal-recognition"
file_path = "./demo.jpg"

with open(file_path, "rb") as file:
    file_bytes = file.read()
    file_data = base64.b64encode(file_bytes).decode("ascii")

payload = {"file": file_data, "fileType": 1}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
for i, res in enumerate(result["sealRecResults"]):
    print(res["prunedResult"])
    for img_name, img in res["outputImages"].items():
        img_path = f"{img_name}_{i}.jpg"
        with open(img_path, "wb") as f:
            f.write(base64.b64decode(img))
        print(f"Output image saved at {img_path}")


4. Custom Development

If the default model weights provided by the seal text recognition pipeline do not meet your requirements in terms of accuracy or speed, you can try to fine-tune the existing models using your own domain-specific or application data to improve the recognition performance of the seal text recognition pipeline in your scenario.

4.1 Model Fine-Tuning

Since the seal text recognition pipeline consists of several modules, if the pipeline's performance does not meet expectations, the issue may arise from any one of these modules. You can analyze images with poor recognition results to identify which module is problematic and refer to the corresponding fine-tuning tutorial links in the table below for model fine-tuning.

| Scenario | Fine-Tuning Module | Fine-Tuning Reference Link |
|---|---|---|
| Inaccurate or missing seal position detection | Layout Detection Module | Link |
| Missing text detection | Seal Text Detection Module | Link |
| Inaccurate text content | Text Recognition Module | Link |
| Inaccurate full-image rotation correction | Document Image Orientation Classification Module | Link |
| Inaccurate image distortion correction | Text Image Correction Module | Fine-tuning not supported |

4.2 Model Application

After you complete the fine-tuning training with a private dataset, you can obtain the local model weight files. You can then use the fine-tuned model weights by specifying the local model save path through parameters or by using a custom pipeline configuration file.

4.2.1 Specify Local Model Path via Parameters

When initializing the pipeline object, specify the local model path through parameters. Taking the use of fine-tuned weights for the document orientation classification model as an example:

Command line method:

# Specify the local model path through --doc_orientation_classify_model_dir
paddleocr seal_recognition -i ./seal_text_det.png --doc_orientation_classify_model_dir your_orientation_classify_model_path

# By default, the PP-LCNet_x1_0_doc_ori model is used as the default document orientation classification model. If the fine-tuned model has a different name, modify it with --doc_orientation_classify_model_name
paddleocr seal_recognition -i ./seal_text_det.png --doc_orientation_classify_model_name PP-LCNet_x1_0_doc_ori --doc_orientation_classify_model_dir your_orientation_classify_model_path

Script method:

from paddleocr import SealRecognition

# Specify the local model path through doc_orientation_classify_model_dir
pipeline = SealRecognition(doc_orientation_classify_model_dir="./your_orientation_classify_model_path")

# By default, the PP-LCNet_x1_0_doc_ori model is used as the default document orientation classification model. If the fine-tuned model has a different name, modify it with doc_orientation_classify_model_name
# pipeline = SealRecognition(doc_orientation_classify_model_name="PP-LCNet_x1_0_doc_ori", doc_orientation_classify_model_dir="./your_orientation_classify_model_path")

4.2.2 Specify Local Model Path via Configuration File

  1. Obtain the Pipeline Configuration File

You can call the export_paddlex_config_to_yaml method of the seal text recognition pipeline object in PaddleOCR to export the current pipeline configuration to a YAML file:

from paddleocr import SealRecognition

pipeline = SealRecognition()
pipeline.export_paddlex_config_to_yaml("SealRecognition.yaml")
  2. Modify the Configuration File

After obtaining the default pipeline configuration file, replace the local path of the fine-tuned model weights in the corresponding position of the pipeline configuration file. For example:

......
SubPipelines:
  DocPreprocessor:
    SubModules:
      DocOrientationClassify:
        model_dir: null  # Replace with the path of the fine-tuned document orientation classification model weights
        model_name: PP-LCNet_x1_0_doc_ori # If the name of the fine-tuned model is different from the default model name, please also modify here
        module_name: doc_text_orientation
      DocUnwarping:
        model_dir: null  # Replace with the path of the fine-tuned document unwarping model weights
        model_name: UVDoc # If the name of the fine-tuned model is different from the default model name, please also modify here
        module_name: image_unwarping
    pipeline_name: doc_preprocessor
    use_doc_orientation_classify: true
    use_doc_unwarping: true
......

The pipeline configuration file not only contains the parameters supported by the SealRecognition CLI and Python API but also allows for more advanced configurations. Detailed information can be found in the PaddleX Model pipeline Usage Overview, where you can find the corresponding pipeline usage tutorial and adjust various configurations as needed.

  3. Load the Pipeline Configuration File in the CLI

After modifying the configuration file, specify the path of the modified pipeline configuration file using the --paddlex_config parameter in the command line. PaddleOCR will read its contents as the pipeline configuration. Example:

paddleocr seal_recognition --paddlex_config SealRecognition.yaml ...
  4. Load the Pipeline Configuration File in the Python API

When initializing the pipeline object, you can pass the PaddleX pipeline configuration file path or configuration dictionary through the paddlex_config parameter. PaddleOCR will read its contents as the pipeline configuration. Example:

from paddleocr import SealRecognition

pipeline = SealRecognition(paddlex_config="SealRecognition.yaml")
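Since paddlex_config also accepts a configuration dictionary, one workflow is to load the exported YAML, edit it in code, and pass the dictionary back. A minimal sketch, assuming PyYAML is installed and the dictionary follows the same schema as the exported file:

import yaml
from paddleocr import SealRecognition

with open("SealRecognition.yaml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)
# e.g., point a sub-module at local fine-tuned weights before constructing the pipeline
pipeline = SealRecognition(paddlex_config=config)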
