
General Table Recognition v2 Production Line User Guide

1. Introduction to General Table Recognition v2 Production Line

Table recognition is a technology that automatically identifies and extracts table content and its structure from documents or images. It is widely used in fields such as data entry, information retrieval, and document analysis. By using computer vision and machine learning algorithms, table recognition can convert complex table information into an editable format, making it easier for users to further process and analyze data.

The General Table Recognition v2 Production Line (PP-TableMagic) is designed for table recognition tasks: it identifies tables in images and outputs them in HTML format. Unlike the original General Table Recognition Production Line, this version introduces two new modules, table classification and table cell detection. By adopting a multi-model pipeline of "table classification + table structure recognition + cell detection", it achieves better end-to-end table recognition than the previous version. On top of this, the v2 production line natively supports targeted model fine-tuning, so developers can customize it to the degree needed for satisfactory performance in different application scenarios. It also supports end-to-end table structure recognition models (e.g., SLANet, SLANet_plus) and allows the wired and wireless table recognition methods to be configured independently, letting developers freely select and combine the best table recognition solutions.

This production line is applicable in a variety of fields, including general, manufacturing, finance, and transportation. It also provides flexible service deployment options, supporting multiple programming languages on various hardware. Additionally, it offers capabilities for secondary development, allowing you to train and fine-tune your own datasets based on this production line, with the trained models seamlessly integrated.

The General Table Recognition v2 Production Line includes the following 8 modules. Each module can be trained and run for inference independently and contains multiple models. For detailed information, click on the corresponding module to view its documentation.

In this production line, you can choose the models to use based on the benchmark data below.

Table Structure Recognition Module Models:
| Model | Download | Accuracy (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| SLANet | Inference Model / Training Model | 59.52 | 103.08 / 103.08 | 197.99 / 197.99 | 6.9 M | SLANet is a table structure recognition model developed by Baidu PaddlePaddle's vision team. It significantly improves the accuracy and inference speed of table structure recognition by combining the CPU-friendly lightweight backbone PP-LCNet, the high-low feature fusion module CSP-PAN, and the SLA Head feature decoding module, which aligns structure and location information. |
| SLANet_plus | Inference Model / Training Model | 63.69 | 140.29 / 140.29 | 195.39 / 195.39 | 6.9 M | An enhanced version of SLANet developed by Baidu PaddlePaddle's vision team. Compared to SLANet, it significantly improves recognition of wireless and complex tables and reduces sensitivity to table positioning accuracy, so even slightly misaligned tables are recognized accurately. |
| SLANeXt_wired | Inference Model / Training Model | 69.65 | -- | -- | 351 M | The SLANeXt series is a new generation of table structure recognition models developed by Baidu PaddlePaddle's vision team. Compared to SLANet and SLANet_plus, SLANeXt focuses on table structure and ships dedicated weights for wired and wireless tables, significantly improving recognition of both types, especially wired tables. |
| SLANeXt_wireless | Inference Model / Training Model | 69.65 | -- | -- | 351 M | Same as SLANeXt_wired (the two models share metrics and description; they differ in the table type their weights target). |
Table Classification Module Models:
| Model | Download | Top-1 Acc (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size |
|---|---|---|---|---|---|
| PP-LCNet_x1_0_table_cls | Inference Model / Training Model | 94.2 | 2.35 / 0.47 | 4.03 / 1.35 | 6.6 M |
Table Cell Detection Module Models:
| Model | Download | mAP (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| RT-DETR-L_wired_table_cell_det | Inference Model / Training Model | 82.7 | 35.00 / 10.45 | 495.51 / 495.51 | 124 M | RT-DETR is the first real-time end-to-end object detection model. Using RT-DETR-L as the base model, Baidu PaddlePaddle's vision team pre-trained it on a self-built table cell detection dataset, achieving good performance on both wired and wireless table cells. |
| RT-DETR-L_wireless_table_cell_det | Inference Model / Training Model | 82.7 | 35.00 / 10.45 | 495.51 / 495.51 | 124 M | Same as above (shared metrics and description). |
Text Detection Module Models:
| Model | Download | Detection Hmean (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PP-OCRv5_server_det | Inference Model / Training Model | 83.8 | 89.55 / 70.19 | 371.65 / 371.65 | 84.3 M | PP-OCRv5 server-side text detection model with higher accuracy, suitable for deployment on high-performance servers. |
| PP-OCRv5_mobile_det | Inference Model / Training Model | 79.0 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 M | PP-OCRv5 mobile-side text detection model with higher efficiency, suitable for deployment on edge devices. |
| PP-OCRv4_server_det | Inference Model / Training Model | 69.2 | 83.34 / 80.91 | 442.58 / 442.58 | 109 M | PP-OCRv4 server-side text detection model with higher accuracy, suitable for deployment on high-performance servers. |
| PP-OCRv4_mobile_det | Inference Model / Training Model | 63.8 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 M | PP-OCRv4 mobile-side text detection model with higher efficiency, suitable for deployment on edge devices. |
Text Recognition Module Models:
| Model | Download | Recognition Avg Accuracy (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PP-OCRv5_server_rec | Inference Model / Pretrained Model | 86.38 | 8.45 / 2.36 | 122.69 / 122.69 | 81 M | PP-OCRv5_rec is a next-generation text recognition model. It aims to efficiently and accurately support four major scripts — Simplified Chinese, Traditional Chinese, English, and Japanese — as well as complex text scenarios such as handwriting, vertical text, pinyin, and rare characters with a single model. While maintaining recognition accuracy, it balances inference speed and model robustness, providing efficient and accurate support for document understanding in various scenarios. |
| PP-OCRv5_mobile_rec | Inference Model/Pretrained Model | 81.29 | 1.46 / 5.43 | 5.32 / 91.79 | 16 M | Same as above (the description is shared across the PP-OCRv5 recognition models); lightweight mobile variant. |
| PP-OCRv4_server_rec_doc | Inference Model / Pretrained Model | 86.58 | 6.65 / 2.38 | 32.92 / 32.92 | 91 M | Built on PP-OCRv4_server_rec and trained on a mix of additional Chinese document data and PP-OCR training data. It improves recognition of some Traditional Chinese characters, Japanese characters, and special symbols, supporting over 15,000 characters, and enhances both document-related and general text recognition. |
| PP-OCRv4_mobile_rec | Inference Model / Pretrained Model | 83.28 | 4.82 / 1.20 | 16.74 / 4.64 | 11 M | Lightweight PP-OCRv4 recognition model with high inference efficiency, deployable on various hardware including edge devices. |
| PP-OCRv4_server_rec | Inference Model / Pretrained Model | 85.19 | 6.58 / 2.43 | 33.17 / 33.17 | 87 M | Server-side PP-OCRv4 model with high inference accuracy, deployable on various servers. |
| en_PP-OCRv4_mobile_rec | Inference Model / Pretrained Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 7.3 M | Ultra-lightweight English recognition model trained on the PP-OCRv4 recognition model, supporting English and numeric character recognition. |
> ❗ The above lists the **6 core models** of the text recognition module. In total, the module supports **20 models**, including multiple multilingual text recognition models. The complete model list is as follows:

PP-OCRv5 Multi-Scenario Models:
| Model | Download | Chinese Avg Accuracy (%) | English Avg Accuracy (%) | Traditional Chinese Avg Accuracy (%) | Japanese Avg Accuracy (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|---|---|---|
| PP-OCRv5_server_rec | Inference Model / Pretrained Model | 86.38 | 64.70 | 93.29 | 60.35 | 8.45 / 2.36 | 122.69 / 122.69 | 81 M | PP-OCRv5_rec is a next-generation text recognition model supporting Simplified Chinese, Traditional Chinese, English, and Japanese, as well as complex scenarios such as handwriting, vertical text, pinyin, and rare characters, with a single model that balances accuracy, inference speed, and robustness. |
| PP-OCRv5_mobile_rec | Inference Model / Pretrained Model | 81.29 | 66.00 | 83.55 | 54.65 | 1.46 / 5.43 | 5.32 / 91.79 | 16 M | Same as above; lightweight mobile variant. |
Chinese Recognition Models:
| Model | Download | Recognition Avg Accuracy (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PP-OCRv4_server_rec_doc | Inference Model / Training Model | 86.58 | 6.65 / 2.38 | 32.92 / 32.92 | 181 M | Built on PP-OCRv4_server_rec and trained on a mix of additional Chinese document data and PP-OCR training data. It improves recognition of some Traditional Chinese, Japanese, and special characters, supporting over 15,000 characters, and enhances both document-related and general text recognition. |
| PP-OCRv4_mobile_rec | Inference Model / Training Model | 83.28 | 4.82 / 1.20 | 16.74 / 4.64 | 88 M | Lightweight PP-OCRv4 recognition model with high inference efficiency, deployable on various hardware including edge devices. |
| PP-OCRv4_server_rec | Inference Model / Training Model | 85.19 | 6.58 / 2.43 | 33.17 / 33.17 | 151 M | Server-side PP-OCRv4 model with high inference accuracy, deployable on various servers. |
| PP-OCRv3_mobile_rec | Inference Model / Training Model | 75.43 | 5.87 / 1.19 | 9.07 / 4.28 | 138 M | Lightweight PP-OCRv3 recognition model with high inference efficiency, deployable on various hardware including edge devices. |
| ch_SVTRv2_rec | Inference Model / Training Model | 68.81 | 8.08 / 2.74 | 50.17 / 42.50 | 126 M | SVTRv2 is a server-side text recognition model developed by the OpenOCR team at Fudan University's Vision and Learning Laboratory (FVL). It won first place in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition, improving end-to-end recognition accuracy by 6% over PP-OCRv4. |
| ch_RepSVTR_rec | Inference Model / Training Model | 65.07 | 5.93 / 1.62 | 20.73 / 7.32 | 70 M | RepSVTR is a mobile text recognition model based on SVTRv2. It won first place in the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition, improving end-to-end recognition accuracy by 2.5% over PP-OCRv4 at the same inference speed. |
English Recognition Models:
| Model | Download | Recognition Avg Accuracy (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| en_PP-OCRv4_mobile_rec | Inference Model / Training Model | 70.39 | 4.81 / 0.75 | 16.10 / 5.31 | 66 M | Ultra-lightweight English recognition model trained on the PP-OCRv4 recognition model, supporting English and digit recognition. |
| en_PP-OCRv3_mobile_rec | Inference Model / Training Model | 70.69 | 5.44 / 0.75 | 8.65 / 5.57 | 85 M | Ultra-lightweight English recognition model trained on the PP-OCRv3 recognition model, supporting English and digit recognition. |
Multilingual Recognition Models:
| Model | Download | Recognition Avg Accuracy (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| korean_PP-OCRv3_mobile_rec | Inference Model / Training Model | 60.21 | 5.40 / 0.97 | 9.11 / 4.05 | 114 M | Ultra-lightweight Korean recognition model trained on the PP-OCRv3 recognition model, supporting Korean and digit recognition. |
| japan_PP-OCRv3_mobile_rec | Inference Model / Training Model | 45.69 | 5.70 / 1.02 | 8.48 / 4.07 | 120 M | Ultra-lightweight Japanese recognition model trained on the PP-OCRv3 recognition model, supporting Japanese and digit recognition. |
| chinese_cht_PP-OCRv3_mobile_rec | Inference Model / Training Model | 82.06 | 5.90 / 1.28 | 9.28 / 4.34 | 152 M | Ultra-lightweight Traditional Chinese recognition model trained on the PP-OCRv3 recognition model, supporting Traditional Chinese and digit recognition. |
| te_PP-OCRv3_mobile_rec | Inference Model / Training Model | 95.88 | 5.42 / 0.82 | 8.10 / 6.91 | 85 M | Ultra-lightweight Telugu recognition model trained on the PP-OCRv3 recognition model, supporting Telugu and digit recognition. |
| ka_PP-OCRv3_mobile_rec | Inference Model / Training Model | 96.96 | 5.25 / 0.79 | 9.09 / 3.86 | 85 M | Ultra-lightweight Kannada recognition model trained on the PP-OCRv3 recognition model, supporting Kannada and digit recognition. |
| ta_PP-OCRv3_mobile_rec | Inference Model / Training Model | 76.83 | 5.23 / 0.75 | 10.13 / 4.30 | 85 M | Ultra-lightweight Tamil recognition model trained on the PP-OCRv3 recognition model, supporting Tamil and digit recognition. |
| latin_PP-OCRv3_mobile_rec | Inference Model / Training Model | 76.93 | 5.20 / 0.79 | 8.83 / 7.15 | 85 M | Ultra-lightweight Latin-script recognition model trained on the PP-OCRv3 recognition model, supporting Latin letters and digit recognition. |
| arabic_PP-OCRv3_mobile_rec | Inference Model / Training Model | 73.55 | 5.35 / 0.79 | 8.80 / 4.56 | 85 M | Ultra-lightweight Arabic-script recognition model trained on the PP-OCRv3 recognition model, supporting Arabic letters and digit recognition. |
| cyrillic_PP-OCRv3_mobile_rec | Inference Model / Training Model | 94.28 | 5.23 / 0.76 | 8.89 / 3.88 | 85 M | Ultra-lightweight Cyrillic-script recognition model trained on the PP-OCRv3 recognition model, supporting Cyrillic letters and digit recognition. |
| devanagari_PP-OCRv3_mobile_rec | Inference Model / Training Model | 96.44 | 5.22 / 0.79 | 8.56 / 4.06 | 85 M | Ultra-lightweight Devanagari-script recognition model trained on the PP-OCRv3 recognition model, supporting Devanagari letters and digit recognition. |
Layout Region Detection Module Models:
| Model | Download | mAP(0.5) (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PP-DocLayout_plus-L | Inference Model / Training Model | 83.2 | 34.6244 / 10.3945 | 510.57 / - | 126.01 M | Higher-precision layout region localization model trained on RT-DETR-L, using a self-built dataset that includes Chinese and English papers, multi-column magazines, newspapers, PPTs, contracts, books, exam papers, research reports, ancient texts, Japanese documents, and vertical text documents. |
| PP-DocLayout-L | Inference Model / Training Model | 90.4 | 34.6244 / 10.3945 | 510.57 / - | 123.76 M | High-precision layout region localization model trained on RT-DETR-L, using a self-built dataset that includes Chinese and English papers, magazines, contracts, books, exam papers, and research reports. |
| PP-DocLayout-M | Inference Model / Training Model | 75.2 | 13.3259 / 4.8685 | 44.0680 / 44.0680 | 22.578 M | Layout region localization model balancing accuracy and efficiency, trained on PicoDet-L with the same self-built dataset of Chinese and English papers, magazines, contracts, books, exam papers, and research reports. |
| PP-DocLayout-S | Inference Model / Training Model | 70.9 | 8.3008 / 2.3794 | 10.0623 / 9.9296 | 4.834 M | Highly efficient layout region localization model trained on PicoDet-S with the same self-built dataset of Chinese and English papers, magazines, contracts, books, exam papers, and research reports. |
> ❗ The above lists the 4 core models of the layout detection module. In total, the module supports 12 models, including multiple models pre-defined for different category sets. The complete model list is as follows:

Table Layout Detection Models:
| Model | Download | mAP(0.5) (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x_table | Inference Model / Training Model | 97.5 | 8.02 / 3.09 | 23.70 / 20.41 | 7.4 M | High-efficiency layout region localization model trained on PicoDet-1x with a self-built dataset, capable of locating tables as one region type. |
3-Class Layout Detection Models (tables, images, and stamps):
| Model | Download | mAP(0.5) (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_3cls | Inference Model / Training Model | 88.2 | 8.99 / 2.22 | 16.11 / 8.73 | 4.8 M | High-efficiency layout region localization model based on the lightweight PicoDet-S, trained on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| PicoDet-L_layout_3cls | Inference Model / Training Model | 89.0 | 13.05 / 4.50 | 41.30 / 41.30 | 22.6 M | Layout region localization model balancing efficiency and accuracy, based on PicoDet-L and trained on the same self-built dataset of Chinese and English papers, magazines, and research reports. |
| RT-DETR-H_layout_3cls | Inference Model / Training Model | 95.8 | 114.93 / 27.71 | 947.56 / 947.56 | 470.1 M | High-precision layout region localization model based on RT-DETR-H, trained on the same self-built dataset of Chinese and English papers, magazines, and research reports. |
5-Class English Document Region Detection Models (text, titles, tables, images, and lists):
| Model | Download | mAP(0.5) (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PicoDet_layout_1x | Inference Model / Training Model | 97.8 | 9.03 / 3.10 | 25.82 / 20.70 | 7.4 M | High-efficiency English document layout region localization model trained on the PubLayNet dataset. |
17-Class Region Detection Models (title, image, text, number, abstract, content, chart title, formula, table, table title, references, document title, footnote, header, algorithm, footer, and stamp):
| Model | Download | mAP(0.5) (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PicoDet-S_layout_17cls | Inference Model / Training Model | 87.4 | 9.11 / 2.12 | 15.42 / 9.12 | 4.8 M | High-efficiency layout region localization model based on the lightweight PicoDet-S, trained on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| PicoDet-L_layout_17cls | Inference Model / Training Model | 89.0 | 13.50 / 4.69 | 43.32 / 43.32 | 22.6 M | Layout region localization model balancing efficiency and accuracy, based on PicoDet-L and trained on the same self-built dataset of Chinese and English papers, magazines, and research reports. |
| RT-DETR-H_layout_17cls | Inference Model / Training Model | 98.3 | 115.29 / 104.09 | 995.27 / 995.27 | 470.2 M | High-precision layout region localization model based on RT-DETR-H, trained on the same self-built dataset of Chinese and English papers, magazines, and research reports. |
Text Image Unwarping Module Models (Optional):
| Model | Download | MS-SSIM (%) | Model Size | Description |
|---|---|---|---|---|
| UVDoc | Inference Model / Training Model | 54.40 | 30.3 M | High-precision text image unwarping model. |
Document Image Orientation Classification Module Models (Optional):
| Model | Download | Top-1 Acc (%) | GPU Inference Time (ms) (Regular / High-Performance) | CPU Inference Time (ms) (Regular / High-Performance) | Model Size | Description |
|---|---|---|---|---|---|---|
| PP-LCNet_x1_0_doc_ori | Inference Model / Training Model | 99.06 | 2.31 / 0.43 | 3.37 / 1.27 | 7 M | Document image orientation classification model based on PP-LCNet_x1_0, with four classes: 0 degrees, 90 degrees, 180 degrees, and 270 degrees. |
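The orientation model predicts one of the four rotation classes above; downstream, the image is rotated back to upright before further processing. The sketch below illustrates that correction step. It is not part of the pipeline API: the helper name is ours, and we assume the predicted class gives the counter-clockwise angle by which the document was rotated.

```python
import numpy as np

def correct_orientation(image: np.ndarray, predicted_angle: int) -> np.ndarray:
    """Rotate an image back to upright given the predicted rotation class.

    Assumes predicted_angle (0, 90, 180, or 270 degrees) is the
    counter-clockwise rotation the document underwent, as output by the
    document image orientation classification module.
    """
    if predicted_angle not in (0, 90, 180, 270):
        raise ValueError(f"unexpected angle: {predicted_angle}")
    # np.rot90 rotates counter-clockwise; rotate by the complementary
    # amount so the total comes back to a full turn (i.e. upright).
    k = (360 - predicted_angle) // 90 % 4
    return np.rot90(image, k)
```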
Testing Environment Information:
  • Performance Testing Environment
    • Test Dataset:
      • Document Image Orientation Classification Model: A self-built dataset by PaddleX, covering multiple scenarios such as certificates and documents, containing 1,000 images.
      • Layout Region Detection Model: A self-built layout region detection dataset by PaddleOCR, containing 500 common document type images, including Chinese and English papers, magazines, contracts, books, exam papers, and research reports.
      • Table Layout Detection Model: A self-built layout table region detection dataset by PaddleOCR, containing 7,835 images with tables in Chinese and English papers.
      • 3-Class Layout Detection Model: A self-built layout region detection dataset by PaddleOCR, containing 1,154 common document type images including Chinese and English papers, magazines, and research reports.
      • 5-Class English Document Region Detection Model: The evaluation dataset from PubLayNet, containing 11,245 images of English documents.
      • 17-Class Region Detection Model: A self-built layout region detection dataset by PaddleOCR, containing 892 common document type images including Chinese and English papers, magazines, and research reports.
      • Table Structure Recognition Model: A high-difficulty Chinese table recognition dataset built internally by PaddleX.
      • Table Cell Detection Model: An evaluation set built internally by PaddleX.
      • Table Classification Model: An evaluation set built internally by PaddleX.
      • Text Detection Model: A Chinese dataset built by PaddleOCR, covering multiple scenarios including street scenes, web images, documents, and handwriting, with 500 images for detection.
      • Chinese Recognition Model: A Chinese dataset built by PaddleOCR, covering multiple scenarios including street scenes, web images, documents, and handwriting, with 11,000 images for text recognition.
      • ch_SVTRv2_rec: The evaluation set from PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task A-list.
      • ch_RepSVTR_rec: The evaluation set from PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition Task B-list.
      • English Recognition Model: An English dataset built internally by PaddleX.
      • Multilingual Recognition Model: A multilingual dataset built internally by PaddleX.
    • Hardware Configuration:
      • GPU: NVIDIA Tesla T4
      • CPU: Intel Xeon Gold 6271C @ 2.60GHz
      • Other Environment: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
  • Inference Mode Information
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Regular Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of precision type and acceleration strategy, chosen from prior knowledge | FP32 precision / 8 threads | Optimal backend (Paddle / OpenVINO / TRT, etc.), selected based on prior knowledge |


If you prioritize accuracy, choose a model with higher accuracy; if inference speed matters more, choose a faster model; if storage footprint is the concern, choose a model with a smaller size.
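As a minimal illustration of this trade-off, the snippet below picks a text detection model from the benchmark table above according to a chosen priority. The metric dict is transcribed from the table (Hmean, regular-mode GPU time, model size); the helper function itself is ours, not part of PaddleOCR.

```python
# Benchmark figures transcribed from the text detection module table:
# detection Hmean (%), regular-mode GPU inference time (ms), model size (MB).
MODELS = {
    "PP-OCRv5_server_det": {"hmean": 83.8, "gpu_ms": 89.55, "size_mb": 84.3},
    "PP-OCRv5_mobile_det": {"hmean": 79.0, "gpu_ms": 8.79, "size_mb": 4.7},
    "PP-OCRv4_server_det": {"hmean": 69.2, "gpu_ms": 83.34, "size_mb": 109.0},
    "PP-OCRv4_mobile_det": {"hmean": 63.8, "gpu_ms": 8.79, "size_mb": 4.7},
}

def pick_model(priority: str) -> str:
    """Return the model name that best matches the given priority."""
    if priority == "accuracy":   # highest detection Hmean
        return max(MODELS, key=lambda m: MODELS[m]["hmean"])
    if priority == "speed":      # lowest GPU inference time
        return min(MODELS, key=lambda m: MODELS[m]["gpu_ms"])
    if priority == "size":       # smallest model file
        return min(MODELS, key=lambda m: MODELS[m]["size_mb"])
    raise ValueError(f"unknown priority: {priority}")
```

The same pattern applies to any of the module tables above: transcribe the columns you care about and rank by the one that matters for your deployment.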

2. Quick Start

Before using the General Table Recognition v2 pipeline locally, please ensure that you have installed the wheel package according to the installation guide. After installation, you can try the pipeline from the command line or integrate it into Python.

2.1 Command Line Experience

A single command allows you to quickly experience the effects of the table_recognition_v2 pipeline:

```bash
paddleocr table_recognition_v2 -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition_v2.jpg

# Specify whether to use the document orientation classification model with --use_doc_orientation_classify
paddleocr table_recognition_v2 -i ./table_recognition_v2.jpg --use_doc_orientation_classify True

# Specify whether to use the text image unwarping module with --use_doc_unwarping
paddleocr table_recognition_v2 -i ./table_recognition_v2.jpg --use_doc_unwarping True

# Specify the device to use GPU for model inference with --device
paddleocr table_recognition_v2 -i ./table_recognition_v2.jpg --device gpu
```
More command-line parameters are supported; they are described in detail below.
| Parameter | Description | Type | Default |
|---|---|---|---|
| input | Data to be predicted; required. Supports: a Python variable, e.g. image data as a numpy.ndarray; a str giving the local path of an image or PDF file (e.g. /root/data/img.jpg), a URL of an image or PDF file, or a local directory containing images to predict (e.g. /root/data/ — directories containing PDF files are not supported; a PDF must be given as a specific file path); or a list whose elements are of the above types, e.g. [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"]. | Python Var\|str\|list | — |
| save_path | Path to save the inference result file. If set to None, results are not saved locally. | str | None |
| layout_detection_model_name | Name of the layout detection model. If set to None, the pipeline default model is used. | str | None |
| layout_detection_model_dir | Directory path of the layout detection model. If set to None, the official model is downloaded. | str | None |
| table_classification_model_name | Name of the table classification model. If set to None, the pipeline default model is used. | str | None |
| table_classification_model_dir | Directory path of the table classification model. If set to None, the official model is downloaded. | str | None |
| wired_table_structure_recognition_model_name | Name of the wired table structure recognition model. If set to None, the pipeline default model is used. | str | None |
| wired_table_structure_recognition_model_dir | Directory path of the wired table structure recognition model. If set to None, the official model is downloaded. | str | None |
| wireless_table_structure_recognition_model_name | Name of the wireless table structure recognition model. If set to None, the pipeline default model is used. | str | None |
| wireless_table_structure_recognition_model_dir | Directory path of the wireless table structure recognition model. If set to None, the official model is downloaded. | str | None |
| wired_table_cells_detection_model_name | Name of the wired table cell detection model. If set to None, the pipeline default model is used. | str | None |
| wired_table_cells_detection_model_dir | Directory path of the wired table cell detection model. If set to None, the official model is downloaded. | str | None |
| wireless_table_cells_detection_model_name | Name of the wireless table cell detection model. If set to None, the pipeline default model is used. | str | None |
| wireless_table_cells_detection_model_dir | Directory path of the wireless table cell detection model. If set to None, the official model is downloaded. | str | None |
| doc_orientation_classify_model_name | Name of the document orientation classification model. If set to None, the pipeline default model is used. | str | None |
| doc_orientation_classify_model_dir | Directory path of the document orientation classification model. If set to None, the official model is downloaded. | str | None |
| doc_unwarping_model_name | Name of the text image unwarping model. If set to None, the pipeline default model is used. | str | None |
| doc_unwarping_model_dir | Directory path of the text image unwarping model. If set to None, the official model is downloaded. | str | None |
| text_detection_model_name | Name of the text detection model. If set to None, the pipeline default model is used. | str | None |
| text_detection_model_dir | Directory path of the text detection model. If set to None, the official model is downloaded. | str | None |
| text_det_limit_side_len | Image side length limit for text detection: any integer greater than 0. If set to None, the pipeline default (960) is used. | int | None |
| text_det_limit_type | Type of the side length limit: min ensures the shortest side of the image is no less than text_det_limit_side_len, while max ensures the longest side is no greater than it. If set to None, the pipeline default (max) is used. | str | None |
| text_det_thresh | Detection pixel threshold: in the output probability map, only pixels scoring above this threshold are treated as text pixels. Any float greater than 0. If set to None, the pipeline default (0.3) is used. | float | None |
| text_det_box_thresh | Detection box threshold: a detected box is kept as a text region when the average score of the pixels inside it exceeds this threshold. Any float greater than 0. If set to None, the pipeline default (0.6) is used. | float | None |
| text_det_unclip_ratio | Text detection expansion coefficient; larger values expand the detected text area more. Any float greater than 0. If set to None, the pipeline default (2.0) is used. | float | None |
| text_recognition_model_name | Name of the text recognition model. If set to None, the pipeline default model is used. | str | None |
| text_recognition_model_dir | Directory path of the text recognition model. If set to None, the official model is downloaded. | str | None |
| text_recognition_batch_size | Batch size for the text recognition model. If set to None, the batch size defaults to 1. | int | None |
| text_rec_score_thresh | Text recognition threshold: only results scoring above this threshold are retained. Any float greater than 0. If set to None, the pipeline default (0.0, i.e. no filtering) is used. | float | None |
| use_doc_orientation_classify | Whether to load the document orientation classification module. If set to None, the pipeline default (True) is used. | bool | None |
| use_doc_unwarping | Whether to load the text image unwarping module. If set to None, the pipeline default (True) is used. | bool | None |
| use_layout_detection | Whether to load the layout detection module. If set to None, the pipeline default (True) is used. | bool | None |
| use_ocr_model | Whether to load the OCR module. If set to None, the pipeline default (True) is used. | bool | None |
| device | Device used for inference; a specific card number may be given: cpu for CPU inference, gpu:0 for the first GPU, npu:0 for the first NPU, xpu:0 for the first XPU, mlu:0 for the first MLU, dcu:0 for the first DCU. If set to None, local GPU device 0 is used when available, otherwise the CPU. | str | None |
| enable_hpi | Whether to enable high-performance inference. | bool | False |
| use_tensorrt | Whether to use TensorRT for inference acceleration. | bool | False |
| min_subgraph_size | Minimum subgraph size for optimizing model subgraph computation. | int | 3 |
| precision | Computation precision, e.g. fp32, fp16. | str | fp32 |
| enable_mkldnn | Whether to enable the MKL-DNN acceleration library. If set to None, it is enabled by default. | bool | None |
| cpu_threads | Number of CPU threads used for inference. | int | 8 |
| paddlex_config | Path to the PaddleX pipeline configuration file. | str | None |


The running results will be printed to the terminal. With the default configuration, the output of the table_recognition_v2 pipeline is as follows:

{'res': {'input_path': '/root/.paddlex/predict_input/table_recognition_v2.jpg', 'page_index': None, 'model_settings': {'use_doc_preprocessor': True, 'use_layout_detection': True, 'use_ocr_model': True}, 'doc_preprocessor_res': {'input_path': None, 'page_index': None, 'model_settings': {'use_doc_orientation_classify': True, 'use_doc_unwarping': True}, 'angle': 180}, 'layout_det_res': {'input_path': None, 'page_index': None, 'boxes': [{'cls_id': 18, 'label': 'chart', 'score': 0.6778535842895508, 'coordinate': [0, 0, 1281.0206, 585.5999]}]}, 'overall_ocr_res': {'input_path': None, 'page_index': None, 'model_settings': {'use_doc_preprocessor': False, 'use_textline_orientation': False}, 'dt_polys': array([[[  4, 301],
        ...,
        [  4, 334]]], dtype=int16), 'text_det_params': {'limit_side_len': 960, 'limit_type': 'max', 'thresh': 0.3, 'box_thresh': 0.4, 'unclip_ratio': 2.0}, 'text_type': 'general', 'textline_orientation_angles': array([-1]), 'text_rec_score_thresh': 0, 'rec_texts': ['其'], 'rec_scores': array([0.97335929]), 'rec_polys': array([[[  4, 301],
        ...,
        [  4, 334]]], dtype=int16), 'rec_boxes': array([[  4, ..., 334]], dtype=int16)}, 'table_res_list': []}}
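The nested result dictionary above can be navigated with plain Python. The miniature result below is hand-built for illustration (only a few representative keys from the sample output are kept, with arrays as plain lists):

```python
# A hand-built miniature of the result structure printed above; only a
# few representative keys are kept (numpy arrays become plain lists here).
res = {
    "res": {
        "model_settings": {"use_doc_preprocessor": True, "use_ocr_model": True},
        "overall_ocr_res": {"rec_texts": ["其"], "rec_scores": [0.97335929]},
        "table_res_list": [],
    }
}

ocr = res["res"]["overall_ocr_res"]
# Pair each recognized string with its confidence score.
pairs = list(zip(ocr["rec_texts"], ocr["rec_scores"]))
print(pairs)  # [('其', 0.97335929)]
```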

The visualization results are saved under save_path.

2.2 Python Script Integration

The command-line method allows for a quick trial and inspection of results. In a project, however, you will typically integrate the pipeline through code. Quick inference takes only a few lines:

from paddleocr import TableRecognitionPipelineV2

pipeline = TableRecognitionPipelineV2()
# pipeline = TableRecognitionPipelineV2(use_doc_orientation_classify=True)  # Use use_doc_orientation_classify to enable the document orientation classification model
# pipeline = TableRecognitionPipelineV2(use_doc_unwarping=True)  # Use use_doc_unwarping to enable the text image unwarping module
# pipeline = TableRecognitionPipelineV2(device="gpu")  # Use device to run model inference on a GPU
output = pipeline.predict("./table_recognition_v2.jpg")
for res in output:
    res.print()  # Print the predicted structured output
    res.save_to_img("./output/")
    res.save_to_xlsx("./output/")
    res.save_to_html("./output/")
    res.save_to_json("./output/")

In the above Python script, the following steps are performed:

(1) Instantiate the general table recognition V2 pipeline object using TableRecognitionPipelineV2(). The specific parameter descriptions are as follows:

Parameter Description Type Default Value
layout_detection_model_name Name of the layout detection model. If set to None, the default model of the pipeline will be used. str None
layout_detection_model_dir Directory path of the layout detection model. If set to None, the official model will be downloaded. str None
table_classification_model_name Name of the table classification model. If set to None, the default model of the pipeline will be used. str None
table_classification_model_dir Directory path of the table classification model. If set to None, the official model will be downloaded. str None
wired_table_structure_recognition_model_name Name of the wired table structure recognition model. If set to None, the default model of the pipeline will be used. str None
wired_table_structure_recognition_model_dir Directory path of the wired table structure recognition model. If set to None, the official model will be downloaded. str None
wireless_table_structure_recognition_model_name Name of the wireless table structure recognition model. If set to None, the default model of the pipeline will be used. str None
wireless_table_structure_recognition_model_dir Directory path of the wireless table structure recognition model. If set to None, the official model will be downloaded. str None
wired_table_cells_detection_model_name Name of the wired table cell detection model. If set to None, the default model of the pipeline will be used. str None
wired_table_cells_detection_model_dir Directory path of the wired table cell detection model. If set to None, the official model will be downloaded. str None
wireless_table_cells_detection_model_name Name of the wireless table cell detection model. If set to None, the default model of the pipeline will be used. str None
wireless_table_cells_detection_model_dir Directory path of the wireless table cell detection model. If set to None, the official model will be downloaded. str None
doc_orientation_classify_model_name Name of the document orientation classification model. If set to None, the default model of the pipeline will be used. str None
doc_orientation_classify_model_dir Directory path of the document orientation classification model. If set to None, the official model will be downloaded. str None
doc_unwarping_model_name Name of the text image unwarping model. If set to None, the default model of the pipeline will be used. str None
doc_unwarping_model_dir Directory path of the text image unwarping model. If set to None, the official model will be downloaded. str None
text_detection_model_name Name of the text detection model. If set to None, the default model of the pipeline will be used. str None
text_detection_model_dir Directory path of the text detection model. If set to None, the official model will be downloaded. str None
text_det_limit_side_len Image side length limit for text detection.
  • int: Any integer greater than 0;
  • None: If set to None, the default value initialized by the pipeline will be used, initialized to 960;
int None
text_det_limit_type Type of the image side length limit for text detection.
  • str: Supports min and max. min ensures that the shortest side of the image is not less than text_det_limit_side_len, while max ensures that the longest side of the image is not greater than text_det_limit_side_len.
  • None: If set to None, the default value initialized by the pipeline will be used, initialized to max;
str None
text_det_thresh Detection pixel threshold. In the output probability map, only pixels with a score greater than this threshold will be considered text pixels.
  • float: Any floating-point number greater than 0.
  • None: If set to None, the default value initialized by the pipeline will be used, which is 0.3.
float None
text_det_box_thresh Detection box threshold. When the average score of all pixels within the detection result box is greater than this threshold, the result is considered a text area.
  • float: Any floating-point number greater than 0.
  • None: If set to None, the default value initialized by the pipeline will be used, which is 0.6.
float None
text_det_unclip_ratio Text detection expansion coefficient. This method expands the text area; the larger this value, the larger the expanded area.
  • float: Any floating-point number greater than 0.
  • None: If set to None, the default value initialized by the pipeline will be used, which is 2.0.
float None
text_recognition_model_name Name of the text recognition model. If set to None, the default model of the pipeline will be used. str None
text_recognition_model_dir Directory path of the text recognition model. If set to None, the official model will be downloaded. str None
text_recognition_batch_size Batch size for the text recognition model. If set to None, the default batch size will be set to 1. int None
text_rec_score_thresh Text recognition threshold. Text results with a score greater than this threshold will be retained.
  • float: Any floating-point number greater than 0.
  • None: If set to None, the default value initialized by the pipeline will be used, which is 0.0. That is, no threshold is set.
float None
use_doc_orientation_classify Whether to load the document orientation classification module. If set to None, the default value initialized by the pipeline will be used, initialized to True. bool None
use_doc_unwarping Whether to load the text image unwarping module. If set to None, the default value initialized by the pipeline will be used, initialized to True. bool None
use_layout_detection Whether to load the layout detection module. If set to None, the default value initialized by the pipeline will be used, initialized to True. bool None
use_ocr_model Whether to load the OCR module. If set to None, the default value initialized by the pipeline will be used, initialized to True. bool None
device The device used for inference. Supports specifying a specific card number.
  • CPU: For example, cpu indicates using CPU for inference;
  • GPU: For example, gpu:0 indicates using the first GPU for inference;
  • NPU: For example, npu:0 indicates using the first NPU for inference;
  • XPU: For example, xpu:0 indicates using the first XPU for inference;
  • MLU: For example, mlu:0 indicates using the first MLU for inference;
  • DCU: For example, dcu:0 indicates using the first DCU for inference;
  • None: If set to None, the default value initialized by the pipeline will be used, which prioritizes using the local GPU device 0; if not available, it will use the CPU device.
str None
enable_hpi Whether to enable high-performance inference. bool False
use_tensorrt Whether to use TensorRT for inference acceleration. bool False
min_subgraph_size Minimum subgraph size for optimizing model subgraph computation. int 3
precision Computation precision, such as fp32, fp16. str fp32
enable_mkldnn Whether to enable the MKL-DNN acceleration library. If set to None, it will be enabled by default. bool None
cpu_threads Number of threads to use for inference on the CPU. int 8
paddlex_config Path to PaddleX pipeline configuration file. str None

(2) Call the predict() method of the general table recognition V2 pipeline object to perform inference prediction, which returns a result list.

Additionally, the pipeline also provides the predict_iter() method. Both methods accept the same parameters and return results in the same way; the difference is that predict_iter() returns a generator, allowing for gradual processing and retrieval of prediction results, suitable for handling large datasets or for scenarios where memory savings are desired. You can choose to use either method based on your actual needs.
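The memory difference between predict() and predict_iter() comes down to a list versus a generator. A library-free sketch of the pattern (the stand-in functions below are illustrative, not the real pipeline API):

```python
def predict(inputs):
    # list-style API: all results are materialized at once
    return [f"result:{x}" for x in inputs]

def predict_iter(inputs):
    # generator-style API: results are produced one at a time
    for x in inputs:
        yield f"result:{x}"

batch = ["img1.jpg", "img2.jpg", "img3.jpg"]
all_results = predict(batch)      # holds every result in memory
streamed = predict_iter(batch)    # lazy; nothing is computed yet
first = next(streamed)            # results arrive one by one on demand
print(all_results[0], first)      # both APIs yield the same items
```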

The parameters and descriptions of the predict() method are as follows:

Parameter Description Type Default Value
input Data to be predicted, supports multiple input types, required.
  • Python Var: For example, image data represented as numpy.ndarray.
  • str: Local path to an image or PDF file, e.g., /root/data/img.jpg; a URL link, such as the network URL of an image or PDF file; or a local directory containing the images to be predicted, e.g., /root/data/ (prediction of directories containing PDF files is currently not supported; a PDF file must be specified by its exact file path).
  • List: The elements of the list must be of the above types, such as [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"].
Python Var|str|list
device Same as the parameters during instantiation. str None
use_doc_orientation_classify Whether to use the document orientation classification module during inference. bool None
use_doc_unwarping Whether to use the text image unwarping module during inference. bool None
use_layout_detection Whether to use the layout detection module during inference. bool None
use_ocr_model Whether to use the ocr model during inference. bool None
text_det_limit_side_len Same as the parameters during instantiation. int None
text_det_limit_type Same as the parameters during instantiation. str None
text_det_thresh Same as the parameters during instantiation. float None
text_det_box_thresh Same as the parameters during instantiation. float None
text_det_unclip_ratio Same as the parameters during instantiation. float None
text_rec_score_thresh Same as the parameters during instantiation. float None
use_e2e_wired_table_rec_model Whether to use the wired end-to-end table recognition mode during inference. bool False
use_e2e_wireless_table_rec_model Whether to use the wireless end-to-end table recognition mode during inference. bool False
use_wired_table_cells_trans_to_html Whether to use the wired table cell detection result direct-to-HTML mode during inference. If enabled, it directly constructs the HTML based on the geometric relationships of the wired table cell detection results. bool False
use_wireless_table_cells_trans_to_html Whether to use the wireless table cell detection result direct-to-HTML mode during inference. If enabled, it directly constructs the HTML based on the geometric relationships of the wireless table cell detection results. bool False
use_table_orientation_classify Whether to use the table orientation classification mode during inference. If enabled, it can correct the direction and correctly complete table recognition when the table in the image has 90/180/270-degree rotation. bool True
use_ocr_results_with_table_cells Whether to use the cell-split OCR mode during inference. If enabled, it will split and re-recognize OCR detection results based on the cell prediction results to avoid missing text. bool True

(3) Process the prediction results. The prediction result for each sample is a corresponding Result object, which supports printing, saving as an image, saving as an xlsx file, saving as an HTML file, and saving as a json file:

Method Description Parameter Type Parameter Description Default Value
print() Print results to the terminal format_json bool Whether to format the output content using JSON indentation True
indent int Specify the indentation level to beautify the output JSON data, making it more readable. Effective only when format_json is True 4
ensure_ascii bool Control whether to escape non-ASCII characters to Unicode. When set to True, all non-ASCII characters will be escaped; False keeps the original characters. Effective only when format_json is True False
save_to_json() Save results as a json format file save_path str The path to save the file. When it is a directory, the saved file is named consistently with the input file. None
indent int Specify the indentation level to beautify the output JSON data, making it more readable. Effective only when format_json is True 4
ensure_ascii bool Control whether to escape non-ASCII characters to Unicode. When set to True, all non-ASCII characters will be escaped; False keeps the original characters. Effective only when format_json is True False
save_to_img() Save results as an image format file save_path str The path to save the file, supporting directory or file path None
save_to_xlsx() Save results as an xlsx format file save_path str The path to save the file, supporting directory or file path None
save_to_html() Save results as an html format file save_path str The path to save the file, supporting directory or file path None
  • Calling the print() method will print the results to the terminal. The content printed to the terminal is explained as follows:

    • input_path: (str) The input path of the image to be predicted.

    • page_index: (Union[int, None]) If the input is a PDF file, this indicates which page of the PDF it is; otherwise, it is None.

    • model_settings: (Dict[str, bool]) Configuration parameters required by the pipeline.

      • use_doc_preprocessor: (bool) Controls whether to enable the document preprocessing sub-pipeline.
      • use_layout_detection: (bool) Controls whether to enable the layout area detection sub-pipeline.
      • use_ocr_model: (bool) Controls whether to enable the OCR sub-pipeline.
    • layout_det_res: (Dict[str, Union[List[numpy.ndarray], List[float]]]) Output results of the layout detection sub-module. Only exists when use_layout_detection=True.
      • input_path: (Union[str, None]) The image path accepted by the layout detection module, saved as None when the input is numpy.ndarray.
      • page_index: (Union[int, None]) If the input is a PDF file, this indicates which page of the PDF it is; otherwise, it is None.
      • boxes: (List[Dict]) List of detection boxes for the layout areas. Each element in the list contains the following fields:
        • cls_id: (int) The category ID of the detection box.
        • score: (float) The confidence of the detection box.
        • coordinate: (List[float]) The coordinates of the detection box as [x1, y1, x2, y2], i.e., the x and y coordinates of the top-left and bottom-right corners.
    • doc_preprocessor_res: (Dict[str, Union[str, Dict[str, bool], int]]) Output results of the document preprocessing sub-pipeline. Only exists when use_doc_preprocessor=True.
      • input_path: (Union[str, None]) The image path accepted by the image preprocessing sub-pipeline, saved as None when the input is numpy.ndarray.
      • model_settings: (Dict) Configuration parameters for the preprocessing sub-pipeline.
        • use_doc_orientation_classify: (bool) Controls whether to enable document orientation classification.
        • use_doc_unwarping: (bool) Controls whether to enable text image unwarping.
      • angle: (int) Prediction result of the document orientation classification. When enabled, the values are [0,1,2,3], corresponding to [0°,90°,180°,270°]; when not enabled, it is -1.
    • dt_polys: (List[numpy.ndarray]) List of polygon boxes for text detection. Each detection box is represented by a numpy array consisting of 4 vertex coordinates, with an array shape of (4, 2) and data type of int16.

    • dt_scores: (List[float]) List of confidence scores for the text detection boxes.

    • text_det_params: (Dict[str, Dict[str, int, float]]) Configuration parameters for the text detection module.

      • limit_side_len: (int) Side length limit value during image preprocessing.
      • limit_type: (str) Processing method for side length limits.
      • thresh: (float) Confidence threshold for classifying text pixels.
      • box_thresh: (float) Confidence threshold for text detection boxes.
      • unclip_ratio: (float) Expansion coefficient for text detection boxes.
    • text_type: (str) The type of text detection, currently fixed as "general".
    • text_rec_score_thresh: (float) Filtering threshold for text recognition results.

    • rec_texts: (List[str]) List of text recognition results, including only the text with confidence exceeding text_rec_score_thresh.

    • rec_scores: (List[float]) List of confidence scores for text recognition, filtered by text_rec_score_thresh.

    • rec_polys: (List[numpy.ndarray]) List of text detection boxes that have been filtered by confidence, with the same format as dt_polys.

    • rec_boxes: (numpy.ndarray) Array of rectangular bounding boxes for detection boxes, with a shape of (n, 4) and dtype of int16. Each row represents the coordinates of a rectangular box as [x_min, y_min, x_max, y_max], where (x_min, y_min) is the top left corner coordinate, and (x_max, y_max) is the bottom right corner coordinate.

  • Calling the save_to_json() method will save the above content to the specified save_path. If a directory is specified, the saved path will be save_path/{your_img_basename}_res.json; if a file is specified, it will be saved directly to that file. Since json files do not support saving numpy arrays, the numpy.array types will be converted to list format.

  • Calling the save_to_img() method will save the visualization results to the specified save_path. If a directory is specified, the saved path will be save_path/{your_img_basename}_ocr_res_img.{your_img_extension}; if a file is specified, it will be saved directly to that file. (The pipeline usually contains many result images, so it is not recommended to specify a specific file path directly, as multiple images will be overwritten and only the last image will be retained.)
  • Calling the save_to_html() method will save the above content to the specified save_path. If a directory is specified, the saved path will be save_path/{your_img_basename}_table_1.html; if a file is specified, it will be saved directly to that file. In the general table recognition V2 pipeline, the HTML format of the table in the image will be written to the specified HTML file.
  • Calling the save_to_xlsx() method will save the above content to the specified save_path. If a directory is specified, the saved path will be save_path/{your_img_basename}_res.xlsx; if a file is specified, it will be saved directly to that file. In the general table recognition V2 pipeline, the Excel format of the table in the image will be written to the specified xlsx file.

  • Additionally, it also supports obtaining visualization images and prediction results through attributes, as follows:

Attribute Description
json Get the prediction results in json format
img Get visualization images in dict format
  • The prediction result obtained by the json attribute is of dict type, and the relevant content is consistent with the content saved by calling the save_to_json() method.
  • The prediction result returned by the img attribute is a dictionary whose keys are table_res_img, ocr_res_img, layout_res_img, and preprocessed_img; the corresponding values are four Image.Image objects: the visualizations of the table recognition result, the OCR result, the layout area detection result, and the image preprocessing, in that order. If a sub-module is not used, the corresponding image is not included in the dictionary.
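The relationship between rec_polys and rec_boxes described earlier (a 4-vertex polygon collapsed to its axis-aligned bounding rectangle) can be sketched in plain Python. The poly_to_rect helper and the sample polygon below are illustrative, not part of the pipeline API:

```python
def poly_to_rect(poly):
    """Collapse a 4-vertex text polygon (as in rec_polys) into the
    axis-aligned [x_min, y_min, x_max, y_max] rectangle used by rec_boxes."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return [min(xs), min(ys), max(xs), max(ys)]

# A slightly rotated quadrilateral, vertices ordered clockwise from top-left.
poly = [[4, 301], [120, 298], [122, 331], [4, 334]]
print(poly_to_rect(poly))  # [4, 298, 122, 334]
```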

3. Development Integration/Deployment

If the model can meet your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.

If you need to apply the model directly in your Python project, you can refer to the example code in 2.2 Python Script Integration.

Additionally, PaddleOCR provides two other deployment methods, which are described in detail below:

🚀 High-Performance Inference: In actual production environments, many applications have strict performance criteria (especially response speed) for deployment strategies to ensure efficient system operation and smooth user experience. To address this, PaddleOCR provides high-performance inference capabilities aimed at optimizing model inference and preprocessing to significantly speed up the end-to-end process. For detailed high-performance inference procedures, please refer to High-Performance Inference.

☁️ Service-Oriented Deployment: Service-oriented deployment is a common form of deployment in actual production environments. By encapsulating inference functions as services, clients can access these services via network requests to obtain inference results. For detailed service-oriented deployment procedures, please refer to Serving.

Below is the API reference for basic service-oriented deployment and examples of multilingual service calls:

API Reference

For all operations provided by the service:

  • HTTP request method is POST.
  • Both request and response bodies are JSON data (JSON objects).
  • When the request is processed successfully, the response status code is 200, and the properties of the response body are as follows:
Name Type Meaning
logId string Request UUID.
errorCode integer Error code. Fixed to 0.
errorMsg string Error description. Fixed to "Success".
result object Operation result.
  • When the request processing is unsuccessful, the properties of the response body are as follows:
Name Type Meaning
logId string Request UUID.
errorCode integer Error code. Same as the response status code.
errorMsg string Error description.
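The envelope conventions above lend themselves to a small client-side check. A minimal sketch, assuming the field names in the tables (check_envelope and the sample body are illustrative, not part of the service):

```python
def check_envelope(body, status_code):
    """Return `result` on success, following the envelope described above;
    on failure, errorCode mirrors the HTTP status code."""
    if status_code == 200 and body.get("errorCode") == 0:
        return body["result"]
    raise RuntimeError(f"{body['errorCode']}: {body['errorMsg']}")

ok = {"logId": "abc", "errorCode": 0, "errorMsg": "Success",
      "result": {"tableRecResults": []}}
print(check_envelope(ok, 200))  # {'tableRecResults': []}
```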

The main operation provided by the service is as follows:

  • infer

Locate and identify tables in the image.

POST /table-recognition

  • The properties of the request body are as follows:
Name Type Meaning Required
file string URL of an image or PDF file accessible to the server, or the Base64-encoded content of such a file. By default, for PDF files with more than 10 pages, only the first 10 pages will be processed.
To lift the page limit, please add the following configuration in the model configuration file:
Serving:
  extra:
    max_num_input_imgs: null
Yes
fileType integer | null File type. 0 represents a PDF file, 1 represents an image file. If this property is not present in the request body, the file type will be inferred from the URL. No
useDocOrientationClassify boolean | null Please refer to the use_doc_orientation_classify parameter description in the predict method of the model object. No
useDocUnwarping boolean | null Please refer to the use_doc_unwarping parameter description in the predict method of the model object. No
useLayoutDetection boolean | null Please refer to the use_layout_detection parameter description in the predict method of the model object. No
useOcrModel boolean | null Please refer to the use_ocr_model parameter description in the predict method of the model object. No
layoutThreshold number | null Please refer to the layout_threshold parameter description in the predict method of the model object. No
layoutNms boolean | null Please refer to the layout_nms parameter description in the predict method of the model object. No
layoutUnclipRatio number | array | null Please refer to the layout_unclip_ratio parameter description in the predict method of the model object. No
layoutMergeBboxesMode string | null Please refer to the layout_merge_bboxes_mode parameter description in the predict method of the model object. No
textDetLimitSideLen integer | null Please refer to the text_det_limit_side_len parameter description in the predict method of the model object. No
textDetLimitType string | null Please refer to the text_det_limit_type parameter description in the predict method of the model object. No
textDetThresh number | null Please refer to the text_det_thresh parameter description in the predict method of the model object. No
textDetBoxThresh number | null Please refer to the text_det_box_thresh parameter description in the predict method of the model object. No
textDetUnclipRatio number | null Please refer to the text_det_unclip_ratio parameter description in the predict method of the model object. No
textRecScoreThresh number | null Please refer to the text_rec_score_thresh parameter description in the predict method of the model object. No
useTableCellsOcrResults boolean Please refer to the use_ocr_results_with_table_cells parameter description in the predict method of the model object. No
useE2eWiredTableRecModel boolean Please refer to the use_e2e_wired_table_rec_model parameter description in the predict method of the model object. No
useE2eWirelessTableRecModel boolean Please refer to the use_e2e_wireless_table_rec_model parameter description in the predict method of the model object. No
  • When the request is processed successfully, the result in the response body has the following properties:
Name Type Meaning
tableRecResults array Table recognition results. The length of the array is 1 (for image input) or the actual number of processed document pages (for PDF input). For PDF input, each element in the array represents the result of one processed page of the PDF file.
dataInfo object Input data information.

Each element in tableRecResults is an object with the following properties:

Name Type Meaning
prunedResult object A simplified version of the JSON representation of the result generated by the predict method of the model object, where the input_path and page_index fields are removed.
outputImages object | null Refer to the img property description of the model prediction results. The images are in JPEG format and encoded in Base64.
inputImage string | null Input image. The image is in JPEG format and encoded in Base64.
Multilingual Service Call Examples
Python
import base64
import requests

API_URL = "http://localhost:8080/table-recognition"
file_path = "./demo.jpg"

with open(file_path, "rb") as file:
    file_bytes = file.read()
    file_data = base64.b64encode(file_bytes).decode("ascii")

payload = {"file": file_data, "fileType": 1}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
for i, res in enumerate(result["tableRecResults"]):
    print(res["prunedResult"])
    for img_name, img in res["outputImages"].items():
        img_path = f"{img_name}_{i}.jpg"
        with open(img_path, "wb") as f:
            f.write(base64.b64decode(img))
        print(f"Output image saved at {img_path}")


4. Secondary Development

If the default model weights provided by the General Table Recognition v2 pipeline do not meet your accuracy or speed requirements, you can fine-tune the existing models on your own domain-specific or application-scenario data to improve recognition performance in your scenario.

Since the General Table Recognition v2 pipeline consists of several modules, any one of them may be responsible for below-expectation performance. Analyze images with poor recognition results to determine which module is at fault, then refer to the corresponding fine-tuning tutorial links in the following table.

Situation Fine-Tuning Module Fine-Tuning Reference Link
Table classification error Table Classification Module Link
Table cell location error Table Cell Detection Module Link
Table structure recognition error Table Structure Recognition Module Link
Failed to detect the area where the table is located Layout Area Detection Module Link
Text detection missed Text Detection Module Link
Incorrect text content Text Recognition Module Link
Overall image rotation/table rotation correction is inaccurate Document Image Orientation Classification Module Link
Image distortion correction is inaccurate Text Image Correction Module Fine-tuning not supported
