Table Structure Recognition Module Tutorial¶
1. Overview¶
Table structure recognition is a key component of table recognition systems, converting non-editable table images into editable table formats (such as HTML). Its goal is to identify the positions of rows, columns, and cells in tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition system. The module usually outputs HTML code for the table area, which is then passed as input to the table recognition pipeline for further processing.
2. Supported Model List¶
| Model | Model Download Link | Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) [Normal Mode / High-Performance Mode] | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|
| SLANet | Inference Model/Training Model | 59.52 | 103.08 / 103.08 | 197.99 / 197.99 | 6.9 | SLANet is a table structure recognition model developed by the Baidu PaddlePaddle Vision Team. By adopting the CPU-friendly lightweight backbone PP-LCNet, the high-low-level feature fusion module CSP-PAN, and SLA Head, a feature decoding module that aligns structure and position information, it greatly improves both the accuracy and the inference speed of table structure recognition. |
| SLANet_plus | Inference Model/Training Model | 63.69 | 140.29 / 140.29 | 195.39 / 195.39 | 6.9 | SLANet_plus is an enhanced version of SLANet developed by the Baidu PaddlePaddle Vision Team. Compared to SLANet, it greatly improves recognition of wireless and complex tables and is less sensitive to table localization accuracy: even if the detected table region is offset, the table can still be recognized accurately. |
| SLANeXt_wired | Inference Model/Training Model | 69.65 | -- | -- | 351 | The SLANeXt series is a new generation of table structure recognition models developed by the Baidu PaddlePaddle Vision Team. Compared to SLANet and SLANet_plus, SLANeXt focuses on table structure recognition and trains dedicated weights for wired and wireless tables separately. Recognition of all table types is significantly improved, especially wired tables. |
| SLANeXt_wireless | Inference Model/Training Model | | | | | |
Test Environment Description:

- Performance Test Environment
  - Test Dataset: High-difficulty Chinese table recognition dataset.
  - Hardware Configuration:
    - GPU: NVIDIA Tesla T4
    - CPU: Intel Xeon Gold 6271C @ 2.60GHz
  - Other Environment: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
- Inference Mode Description

| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of prior precision type and acceleration strategy | FP32 precision / 8 threads | Selects the prior optimal backend (Paddle/OpenVINO/TRT, etc.) |
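As background for the precision choices above: half precision (`fp16`) trades numeric fidelity for speed. A minimal illustration using Python's standard `struct` half-precision format (this is not PaddleOCR code, just a sketch of the rounding involved):

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE-754 half precision ('e' format)."""
    return struct.unpack("e", struct.pack("e", x))[0]

print(to_fp16(0.5))  # 0.5 is exactly representable in fp16
print(to_fp16(0.1))  # 0.1 is not; it rounds to a nearby fp16 value
```

This is why high-performance configurations that drop to FP16 can change scores slightly while speeding up inference.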
3. Quick Start¶
❗ Before getting started, please install the PaddleOCR wheel package. For details, please refer to the Installation Tutorial.
Quickly experience with a single command:
```bash
paddleocr table_structure_recognition -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg
```
The command line supports more parameter settings; detailed explanations are as follows:

| Parameter | Description | Parameter Type | Default Value |
|---|---|---|---|
| `input` | Data to be predicted, required. Supports: a local path of an image or PDF file, e.g. `/root/data/img.jpg`; a URL of an image or PDF file; or a local directory containing the images to be predicted, e.g. `/root/data/` (prediction of PDF files inside directories is not supported; a PDF file must be specified by its own file path). | `str` | |
| `save_path` | Path to save the inference results. If not set, the inference results will not be saved locally. | `str` | |
| `doc_orientation_classify_model_name` | Name of the document orientation classification model. If not set, the pipeline's default model is used. | `str` | |
| `doc_orientation_classify_model_dir` | Directory path of the document orientation classification model. If not set, the official model is downloaded. | `str` | |
| `doc_unwarping_model_name` | Name of the text image unwarping model. If not set, the pipeline's default model is used. | `str` | |
| `doc_unwarping_model_dir` | Directory path of the text image unwarping model. If not set, the official model is downloaded. | `str` | |
| `layout_detection_model_name` | Name of the layout detection model. If not set, the pipeline's default model is used. | `str` | |
| `layout_detection_model_dir` | Directory path of the layout detection model. If not set, the official model is downloaded. | `str` | |
| `seal_text_detection_model_name` | Name of the seal text detection model. If not set, the pipeline's default model is used. | `str` | |
| `seal_text_detection_model_dir` | Directory path of the seal text detection model. If not set, the official model is downloaded. | `str` | |
| `text_recognition_model_name` | Name of the text recognition model. If not set, the pipeline's default model is used. | `str` | |
| `text_recognition_model_dir` | Directory path of the text recognition model. If not set, the official model is downloaded. | `str` | |
| `text_recognition_batch_size` | Batch size for the text recognition model. If not set, defaults to `1`. | `int` | |
| `use_doc_orientation_classify` | Whether to load and use document orientation classification. If not set, defaults to the pipeline initialization value (`True`). | `bool` | |
| `use_doc_unwarping` | Whether to load and use text image unwarping. If not set, defaults to the pipeline initialization value (`True`). | `bool` | |
| `use_layout_detection` | Whether to load and use the layout detection module. If not set, defaults to the pipeline initialization value (`True`). | `bool` | |
| `layout_threshold` | Confidence threshold for layout detection, used to filter out low-confidence predictions. For example, `0.2` filters out all bounding boxes with a confidence score below 0.2. If not set, the default PaddleX official model configuration is used. | `float` | |
| `layout_nms` | Whether to apply NMS (Non-Maximum Suppression) post-processing in layout detection to filter overlapping boxes. If not set, the official model's default configuration is used. | `bool` | |
| `layout_unclip_ratio` | Scaling factor for the side lengths of layout detection boxes. A positive float, e.g. `1.1`, keeps the box center unchanged while scaling the width and height by 1.1. If not set, the default PaddleX official model configuration is used. | `float` | |
| `layout_merge_bboxes_mode` | Merging mode for the detection boxes output by the layout detection model. | `str` | |
| `seal_det_limit_side_len` | Image side length limit for seal text detection. Any integer > 0. If not set, defaults to `736`. | `int` | |
| `seal_det_limit_type` | Limit type for the image side length in seal text detection. Supports `min` and `max`: `min` ensures the shortest side is no less than `seal_det_limit_side_len`, `max` ensures the longest side is no greater than `seal_det_limit_side_len`. If not set, defaults to `min`. | `str` | |
| `seal_det_thresh` | Pixel threshold. Pixels with scores above this value in the probability map are considered text. Any float > 0. If not set, defaults to `0.2`. | `float` | |
| `seal_det_box_thresh` | Box threshold. Boxes whose average pixel score exceeds this value are considered text regions. Any float > 0. If not set, defaults to `0.6`. | `float` | |
| `seal_det_unclip_ratio` | Expansion ratio for seal text detection; a higher value yields a larger expansion area. Any float > 0. If not set, defaults to `0.5`. | `float` | |
| `seal_rec_score_thresh` | Recognition score threshold; text results above this value are kept. Any float ≥ 0. If not set, defaults to `0.0` (no threshold). | `float` | |
| `device` | Device used for inference. Specific card numbers can be specified. | `str` | |
| `enable_hpi` | Whether to enable high-performance inference. | `bool` | `False` |
| `use_tensorrt` | Whether to use TensorRT for inference acceleration. | `bool` | `False` |
| `min_subgraph_size` | Minimum subgraph size, used to optimize the computation of model subgraphs. | `int` | `3` |
| `precision` | Computation precision, e.g. `fp32`, `fp16`. | `str` | `fp32` |
| `enable_mkldnn` | Whether to enable the MKL-DNN acceleration library. | `bool` | `True` |
| `cpu_threads` | Number of CPU threads used for inference. | `int` | `8` |
| `paddlex_config` | Path to the PaddleX pipeline configuration file. | `str` | |
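As a sketch of the geometry that `layout_unclip_ratio` describes (keep the box center fixed, scale the width and height by the ratio), the computation looks like the following illustrative helper. This is not part of the PaddleOCR API, only a demonstration of the scaling rule:

```python
def unclip_box(x1, y1, x2, y2, ratio):
    """Scale a box's width and height by `ratio`, keeping its center fixed.

    Mirrors the behavior described for layout_unclip_ratio (e.g. ratio=1.1
    grows both sides by 10% around the unchanged center).
    """
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    w, h = (x2 - x1) * ratio, (y2 - y1) * ratio
    return (cx - w / 2.0, cy - h / 2.0, cx + w / 2.0, cy + h / 2.0)

print(unclip_box(0, 0, 10, 10, 2.0))  # (-5.0, -5.0, 15.0, 15.0)
```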
After running, the result will be printed to the terminal; its format is the same as that of the result shown in the Python script integration section below. If `save_path` is specified, the results will also be saved locally.
3.1 Python Script Integration¶
- The command line above is for quickly trying out and viewing the results. In a project, you usually need to integrate through code. You can complete quick inference of the module with just a few lines of code, as follows:
```python
from paddleocr import TableStructureRecognition

model = TableStructureRecognition(model_name="SLANet")
output = model.predict(input="table_recognition.jpg", batch_size=1)
for res in output:
    res.print(json_format=False)
    res.save_to_json("./output/res.json")
```
After running, the result is:

```
{'res': {'input_path': 'table_recognition.jpg', 'page_index': None, 'bbox': [[42, 2, 390, 2, 388, 27, 40, 26], [11, 35, 89, 35, 87, 63, 11, 63], [113, 34, 192, 34, 186, 64, 109, 64], [219, 33, 399, 33, 393, 62, 212, 62], [413, 33, 544, 33, 544, 64, 407, 64], [12, 67, 98, 68, 96, 93, 12, 93], [115, 66, 205, 66, 200, 91, 111, 91], [234, 65, 390, 65, 385, 92, 227, 92], [414, 66, 537, 67, 537, 95, 409, 95], [7, 97, 106, 97, 104, 128, 7, 128], [113, 96, 206, 95, 201, 127, 109, 127], [236, 96, 386, 96, 381, 128, 230, 128], [413, 96, 534, 95, 533, 127, 408, 127]], 'structure': ['<html>', '<body>', '<table>', '<tr>', '<td', ' colspan="4"', '>', '</td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '<td></td>', '<td></td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '<td></td>', '<td></td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '<td></td>', '<td></td>', '</tr>', '</table>', '</body>', '</html>'], 'structure_score': 0.99948007}}
```
Parameter meanings are as follows:

- `input_path`: The path of the input table image to be predicted
- `page_index`: If the input is a PDF file, the page number within the PDF; otherwise `None`
- `bbox`: Predicted table cell information, a list of the coordinates of the predicted table cells. Note that table cell predictions are invalid for the SLANeXt series models
- `structure`: The predicted table structure as HTML expressions, a list of the predicted HTML keywords in order
- `structure_score`: Confidence of the predicted table structure
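Since `structure` is an ordered list of HTML tokens, it can be concatenated into a renderable table skeleton. A minimal sketch (the token list below hand-copies a shortened version of the output above; it is not produced by running the model here):

```python
# Shortened token list in the same shape as the 'structure' field above.
structure = ['<html>', '<body>', '<table>', '<tr>', '<td', ' colspan="4"', '>',
             '</td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '</tr>',
             '</table>', '</body>', '</html>']

# Joining the tokens in order yields the HTML table skeleton.
html = "".join(structure)
print(html)

# Counting cells: every cell token starts with '<td' ('<td' opens a cell
# with attributes, '<td></td>' is a plain empty cell).
num_cells = sum(1 for tok in structure if tok.startswith('<td'))
print(num_cells)  # 3
```

The real output's `bbox` list then gives one coordinate quadrilateral per cell, in the same order as the cell tokens.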
Descriptions of related methods and parameters are as follows:
`TableStructureRecognition` instantiates a table structure recognition model (using `SLANet` as an example). Details are as follows:
| Parameter | Description | Type | Options | Default |
|---|---|---|---|---|
| `doc_orientation_classify_model_name` | Name of the document orientation classification model. If set to `None`, the pipeline default model is used. | `str` | All model names | `None` |
| `doc_orientation_classify_model_dir` | Directory path of the document orientation classification model. If set to `None`, the official model will be downloaded. | `str` | | `None` |
| `doc_unwarping_model_name` | Name of the document unwarping model. If set to `None`, the pipeline default model is used. | `str` | | `None` |
| `doc_unwarping_model_dir` | Directory path of the document unwarping model. If set to `None`, the official model will be downloaded. | `str` | | `None` |
| `layout_detection_model_name` | Name of the layout detection model. If set to `None`, the pipeline default model is used. | `str` | | `None` |
| `layout_detection_model_dir` | Directory path of the layout detection model. If set to `None`, the official model will be downloaded. | `str` | | `None` |
| `seal_text_detection_model_name` | Name of the seal text detection model. If set to `None`, the default model will be used. | `str` | | `None` |
| `seal_text_detection_model_dir` | Directory of the seal text detection model. If set to `None`, the official model will be downloaded. | `str` | | `None` |
| `text_recognition_model_name` | Name of the text recognition model. If set to `None`, the pipeline default model is used. | `str` | | `None` |
| `text_recognition_model_dir` | Directory path of the text recognition model. If set to `None`, the official model will be downloaded. | `str` | | `None` |
| `text_recognition_batch_size` | Batch size for the text recognition model. If set to `None`, the batch size defaults to `1`. | `int` | | `None` |
| `use_doc_orientation_classify` | Whether to enable the document orientation classification module. If set to `None`, the value defaults to `True`. | `bool` | | `None` |
| `use_doc_unwarping` | Whether to enable the document image unwarping module. If set to `None`, the value defaults to `True`. | `bool` | | `None` |
| `use_layout_detection` | Whether to load and use the layout detection module. If set to `None`, the value defaults to the pipeline initialization value (`True`). | `bool` | | `None` |
| `layout_threshold` | Same as the parameter used during initialization. | float \| dict | | `None` |
| `layout_nms` | Same as the parameter used during initialization. | `bool` | | `None` |
| `layout_unclip_ratio` | Same as the parameter used during initialization. | float \| Tuple[float, float] \| dict | | `None` |
| `layout_merge_bboxes_mode` | Same as the parameter used during initialization. | str \| dict | | `None` |
| `seal_det_limit_side_len` | Image side length limit for seal text detection. | `int` | | `None` |
| `seal_det_limit_type` | Limit type for the seal text detection image side length. | `str` | | `None` |
| `seal_det_thresh` | Pixel threshold for detection. Pixels with scores greater than this value in the probability map are considered text pixels. | `float` | | `None` |
| `seal_det_box_thresh` | Bounding box threshold. If the average score of all pixels inside a detection box exceeds this value, it is considered a text region. | `float` | | `None` |
| `seal_det_unclip_ratio` | Expansion ratio for seal text detection; the larger the value, the larger the expanded area. | `float` | | `None` |
| `seal_rec_score_thresh` | Score threshold for seal text recognition; text results with scores above it are retained. | `float` | | `None` |
| `device` | Device used for inference. Supports specifying a device ID. | `str` | | `None` |
| `enable_hpi` | Whether to enable high-performance inference. | `bool` | | `False` |
| `use_tensorrt` | Whether to use TensorRT for accelerated inference. | `bool` | | `False` |
| `min_subgraph_size` | Minimum subgraph size, used to optimize model subgraph computation. | `int` | | `3` |
| `precision` | Computation precision, e.g. `fp32`, `fp16`. | `str` | | `"fp32"` |
| `enable_mkldnn` | Whether to enable MKL-DNN acceleration. | `bool` | | `True` |
| `cpu_threads` | Number of CPU threads used for inference. | `int` | | `8` |
| `paddlex_config` | Path to the PaddleX pipeline configuration file. | `str` | | `None` |
- Among them, `model_name` must be specified. If `model_dir` is specified, the user's custom model is used.
- Call the `predict()` method of the table structure recognition model for inference, which returns a list of results. In addition, this module provides a `predict_iter()` method. The two accept the same parameters and return the same results, except that `predict_iter()` returns a `generator` that processes and yields prediction results one by one, which is suitable for large datasets or for saving memory. Choose whichever fits your needs. The parameters of the `predict()` method are described as follows:
| Parameter | Parameter Description | Parameter Type | Default Value |
|---|---|---|---|
| `input` | Input data to be predicted. Required. Supports multiple types. | Python Var \| str \| list | |
| `use_doc_orientation_classify` | Whether to use the document orientation classification module during inference. | `bool` | `None` |
| `use_doc_unwarping` | Whether to use the text image unwarping module during inference. | `bool` | `None` |
| `use_layout_detection` | Whether to use the layout detection module during inference. | `bool` | `None` |
| `layout_threshold` | Same as the parameter during instantiation. | float \| dict | `None` |
| `layout_nms` | Same as the parameter during instantiation. | `bool` | `None` |
| `layout_unclip_ratio` | Same as the parameter during instantiation. | float \| Tuple[float, float] \| dict | `None` |
| `layout_merge_bboxes_mode` | Same as the parameter during instantiation. | str \| dict | `None` |
| `seal_det_limit_side_len` | Same as the parameter during instantiation. | `int` | `None` |
| `seal_det_limit_type` | Same as the parameter during instantiation. | `str` | `None` |
| `seal_det_thresh` | Same as the parameter during instantiation. | `float` | `None` |
| `seal_det_box_thresh` | Same as the parameter during instantiation. | `float` | `None` |
| `seal_det_unclip_ratio` | Same as the parameter during instantiation. | `float` | `None` |
| `seal_rec_score_thresh` | Same as the parameter during instantiation. | `float` | `None` |
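The list-vs-generator contract of `predict()` versus `predict_iter()` described above can be mimicked with a tiny stand-in (this mock is purely illustrative and is not the PaddleOCR API):

```python
import types

def predict(inputs):
    # Like predict(): materializes all results into a list before returning.
    return [{"input_path": p, "structure_score": 0.99} for p in inputs]

def predict_iter(inputs):
    # Like predict_iter(): yields one result at a time, so memory use stays
    # flat regardless of how many inputs are processed.
    for p in inputs:
        yield {"input_path": p, "structure_score": 0.99}

batch = ["a.jpg", "b.jpg"]
print(predict(batch) == list(predict_iter(batch)))  # True: same results
print(isinstance(predict_iter(batch), types.GeneratorType))  # True
```

For a handful of images either form works; for a large directory the generator form avoids holding every result in memory at once.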
- For processing prediction results, the prediction result of each sample is a corresponding Result object, which supports printing and saving as a `json` file:

| Method | Description | Parameter | Type | Parameter Description | Default |
|---|---|---|---|---|---|
| `print()` | Print result to terminal | `format_json` | `bool` | Whether to format the output content using JSON indentation. | `True` |
| | | `indent` | `int` | Indentation level to beautify the JSON output for better readability; effective only when `format_json` is `True`. | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode. When `True`, all non-ASCII characters are escaped; `False` retains the original characters. Effective only when `format_json` is `True`. | `False` |
| `save_to_json()` | Save result as a `json` file | `save_path` | `str` | File path to save the results. When it is a directory, the saved file name matches the input file type. | `None` |
| | | `indent` | `int` | Indentation level to beautify the JSON output for better readability; effective only when `format_json` is `True`. | `4` |
| | | `ensure_ascii` | `bool` | Whether to escape non-ASCII characters to Unicode. When `True`, all non-ASCII characters are escaped; `False` retains the original characters. Effective only when `format_json` is `True`. | `False` |
| `save_to_img()` | Save results as an image file | `save_path` | `str` | File path to save the results; supports a directory or a file path. | `None` |
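The `indent` and `ensure_ascii` parameters behave like the flags of the same names in Python's standard `json.dumps` (shown here with plain `json`, not the PaddleOCR result object):

```python
import json

data = {"rec_texts": ["表格"], "structure_score": 0.99}

escaped = json.dumps(data, indent=4, ensure_ascii=True)   # non-ASCII -> \uXXXX
raw = json.dumps(data, indent=4, ensure_ascii=False)      # keeps original text

print("表格" in raw)         # True: characters preserved
print("\\u8868" in escaped)  # True: 表 (U+8868) is escaped
```

Use `ensure_ascii=False` when the saved JSON should remain human-readable for Chinese table content.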
- In addition, results can be obtained through attributes, as follows:

| Attribute | Description |
|---|---|
| `json` | Get the prediction result in `json` format |
4. Secondary Development¶
If the above models are still not ideal for your scenario, you can try the following steps for secondary development. Here, training `SLANet_plus` is used as an example; for other models, just replace the corresponding configuration file. First, prepare a table structure recognition dataset, following the format of the table structure recognition demo data. Once ready, you can train and export the model as described below; after exporting, the model can be quickly integrated into the API above. This example uses the table structure recognition demo data. Before training, please make sure you have installed the PaddleOCR dependencies according to the installation documentation.
4.1 Dataset and Pretrained Model Preparation¶
4.1.1 Prepare Dataset¶
```bash
# Download the sample dataset
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/table_rec_dataset_examples.tar
tar -xf table_rec_dataset_examples.tar
```
4.1.2 Download Pretrained Model¶
```bash
# Download the SLANet_plus pretrained model
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/SLANet_plus_pretrained.pdparams
```
4.2 Model Training¶
PaddleOCR is modularized. When training the `SLANet_plus` recognition model, you need to use the configuration file of `SLANet_plus`.
The training commands are as follows:
```bash
# Single-GPU training (default training method)
python3 tools/train.py -c configs/table/SLANet_plus.yml \
   -o Global.pretrained_model=./SLANet_plus_pretrained.pdparams \
   Train.dataset.data_dir=./table_rec_dataset_examples \
   Train.dataset.label_file_list='[./table_rec_dataset_examples/train.txt]' \
   Eval.dataset.data_dir=./table_rec_dataset_examples \
   Eval.dataset.label_file_list='[./table_rec_dataset_examples/val.txt]'

# Multi-GPU training; specify card numbers via the --gpus parameter
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py \
   -c configs/table/SLANet_plus.yml \
   -o Global.pretrained_model=./SLANet_plus_pretrained.pdparams \
   Train.dataset.data_dir=./table_rec_dataset_examples \
   Train.dataset.label_file_list='[./table_rec_dataset_examples/train.txt]' \
   Eval.dataset.data_dir=./table_rec_dataset_examples \
   Eval.dataset.label_file_list='[./table_rec_dataset_examples/val.txt]'
```
4.3 Model Evaluation¶
You can evaluate the trained weights, such as `output/xxx/xxx.pdparams`, using the following command:

```bash
# Note: set pretrained_model to a local path. If you use weights saved from your own
# training, modify the path and file name to {path/to/weights}/{model_name}.
# Demo test set evaluation
python3 tools/eval.py -c configs/table/SLANet_plus.yml -o \
   Global.pretrained_model=output/xxx/xxx.pdparams \
   Eval.dataset.data_dir=./table_rec_dataset_examples \
   Eval.dataset.label_file_list='[./table_rec_dataset_examples/val.txt]'
```
4.4 Model Export¶
```bash
python3 tools/export_model.py -c configs/table/SLANet_plus.yml -o \
   Global.pretrained_model=output/xxx/xxx.pdparams \
   Global.save_inference_dir="./SLANet_plus_infer/"
```
After exporting the model, the static graph model will be stored in `./SLANet_plus_infer/` in the current directory. In this directory, you will see the following files: