
Table Structure Recognition Module Tutorial

1. Overview

Table structure recognition is an important component of table recognition systems, capable of converting non-editable table images into editable table formats (such as HTML). The goal of table structure recognition is to identify the positions of rows, columns, and cells in tables. The performance of this module directly affects the accuracy and efficiency of the entire table recognition system. The table structure recognition module usually outputs HTML code for the table area, which is then passed as input to the table recognition pipeline for further processing.

2. Supported Model List

| Model | Model Download Link | Accuracy (%) | GPU Inference Time (ms) [Normal Mode / High Performance Mode] | CPU Inference Time (ms) [Normal Mode / High Performance Mode] | Model Storage Size | Description |
|---|---|---|---|---|---|---|
| SLANet | Inference Model / Training Model | 59.52 | 103.08 / 103.08 | 197.99 / 197.99 | 6.9 M | SLANet is a table structure recognition model independently developed by the Baidu PaddlePaddle Vision Team. By adopting the CPU-friendly lightweight backbone network PP-LCNet, the high-low-level feature fusion module CSP-PAN, and the SLA Head, a feature decoding module that aligns structure and position information, this model greatly improves the accuracy and inference speed of table structure recognition. |
| SLANet_plus | Inference Model / Training Model | 63.69 | 140.29 / 140.29 | 195.39 / 195.39 | 6.9 M | SLANet_plus is an enhanced version of SLANet, independently developed by the Baidu PaddlePaddle Vision Team. Compared to SLANet, it greatly improves recognition of wireless and complex tables and reduces the model's sensitivity to table positioning accuracy; even if the table positioning is offset, the table can still be recognized accurately. |
| SLANeXt_wired | Inference Model / Training Model | 69.65 | -- | -- | 351 M | The SLANeXt series is a new generation of table structure recognition models independently developed by the Baidu PaddlePaddle Vision Team. Compared to SLANet and SLANet_plus, SLANeXt focuses on table structure recognition and trains dedicated weights for wired and wireless tables separately. Recognition of all types of tables is significantly improved, especially wired tables. |
| SLANeXt_wireless | Inference Model / Training Model | 69.65 | -- | -- | 351 M | See SLANeXt_wired: the accuracy, inference time, size, and description are shared across the SLANeXt series; this model provides the dedicated weights for wireless tables. |

Test Environment Description:

  • Performance Test Environment
    • Test Dataset: High-difficulty Chinese table recognition dataset.
    • Hardware Configuration:
      • GPU: NVIDIA Tesla T4
      • CPU: Intel Xeon Gold 6271C @ 2.60GHz
      • Other Environment: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
  • Inference Mode Description
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 precision / No TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High Performance Mode | Optimal combination of prior precision type and acceleration strategy | FP32 precision / 8 threads | Selects the prior optimal backend (Paddle/OpenVINO/TRT, etc.) |

3. Quick Start

❗ Before getting started, please install the PaddleOCR wheel package. For details, please refer to the Installation Tutorial.

Quickly experience with a single command:

paddleocr table_structure_recognition -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/table_recognition.jpg

You can also integrate the model inference of the table structure recognition module into your own project. Before running the code below, please download the sample image to your local machine.

from paddleocr import TableStructureRecognition
model = TableStructureRecognition(model_name="SLANet")
output = model.predict(input="table_recognition.jpg", batch_size=1)
for res in output:
    res.print(json_format=False)
    res.save_to_json("./output/res.json")

After running, the result is:

{'res': {'input_path': 'table_recognition.jpg', 'page_index': None, 'bbox': [[42, 2, 390, 2, 388, 27, 40, 26], [11, 35, 89, 35, 87, 63, 11, 63], [113, 34, 192, 34, 186, 64, 109, 64], [219, 33, 399, 33, 393, 62, 212, 62], [413, 33, 544, 33, 544, 64, 407, 64], [12, 67, 98, 68, 96, 93, 12, 93], [115, 66, 205, 66, 200, 91, 111, 91], [234, 65, 390, 65, 385, 92, 227, 92], [414, 66, 537, 67, 537, 95, 409, 95], [7, 97, 106, 97, 104, 128, 7, 128], [113, 96, 206, 95, 201, 127, 109, 127], [236, 96, 386, 96, 381, 128, 230, 128], [413, 96, 534, 95, 533, 127, 408, 127]], 'structure': ['<html>', '<body>', '<table>', '<tr>', '<td', ' colspan="4"', '>', '</td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '<td></td>', '<td></td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '<td></td>', '<td></td>', '</tr>', '<tr>', '<td></td>', '<td></td>', '<td></td>', '<td></td>', '</tr>', '</table>', '</body>', '</html>'], 'structure_score': 0.99948007}}

Parameter meanings are as follows:

  • input_path: The path of the input table image to be predicted
  • page_index: If the input is a PDF file, indicates the page number of the PDF; otherwise, it is None
  • bbox: Predicted table cell information, a list of the coordinates of the predicted table cells. Note that cell predictions from the SLANeXt series models are invalid
  • structure: Predicted table structure HTML expressions, a list consisting of predicted HTML keywords in order
  • structure_score: Confidence of the predicted table structure
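
For example, the structure token list can be joined into a complete HTML string. Below is a minimal sketch, assuming the result layout shown above and that the json attribute (described later in this section) returns that same dict:

from paddleocr import TableStructureRecognition

model = TableStructureRecognition(model_name="SLANet")
output = model.predict(input="table_recognition.jpg", batch_size=1)
for res in output:
    data = res.json["res"]               # raw prediction dict, assumed to match the printout above
    html = "".join(data["structure"])    # concatenate the predicted HTML tokens in order
    print("structure_score:", data["structure_score"])
    print("number of cells:", len(data["bbox"]))
    print(html)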

Descriptions of related methods and parameters are as follows:

  • TableStructureRecognition instantiates a table structure recognition model (using SLANet as an example). Details are as follows:
| Parameter | Description | Type | Default |
|---|---|---|---|
| model_name | Name of the model | str | None |
| model_dir | Model storage path | str | None |
| device | Device(s) to use for inference. Examples: cpu, gpu, npu, gpu:0, gpu:0,1. If multiple devices are specified, inference will be performed in parallel. Note that parallel inference is not always supported. By default, GPU 0 is used if available; otherwise, the CPU is used. | str | None |
| enable_hpi | Whether to use high-performance inference. | bool | False |
| use_tensorrt | Whether to use the Paddle Inference TensorRT subgraph engine. | bool | False |
| min_subgraph_size | Minimum subgraph size for TensorRT when using the Paddle Inference TensorRT subgraph engine. | int | 3 |
| precision | Precision for TensorRT when using the Paddle Inference TensorRT subgraph engine. Options: fp32, fp16, etc. | str | fp32 |
| enable_mkldnn | Whether to use MKL-DNN acceleration for inference. | bool | True |
| cpu_threads | Number of threads to use for inference on CPUs. | int | 10 |
  • Among them, model_name must be specified. If model_dir is specified, the user's custom model is used.

  • Call the predict() method of the table structure recognition model for inference prediction; it returns a list of results. In addition, this module provides the predict_iter() method. The two methods accept the same parameters and return the same results, but predict_iter() returns a generator, so prediction results can be processed and retrieved step by step, which is suitable for large datasets or memory-sensitive scenarios. Choose either method according to your actual needs (a combined usage sketch follows the tables below). The predict() method has parameters input and batch_size, described as follows:

| Parameter | Description | Type | Default |
|---|---|---|---|
| input | Input data to be predicted. Required. Supports multiple input types: a Python variable, e.g., a numpy.ndarray representing image data; a str, e.g., a local image or PDF file path (/root/data/img.jpg), a URL of an image or PDF file, or a local directory containing images for prediction, e.g., /root/data/ (note: directories containing PDF files are not supported; PDFs must be specified by exact file path); a list whose elements are of the above types, e.g., [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"] | Python Var / str / list | |
| batch_size | Batch size, a positive integer. | int | 1 |
  • For processing prediction results, the prediction result of each sample is the corresponding Result object, and supports printing and saving as a json file:
| Method | Description | Parameter | Type | Parameter Description | Default |
|---|---|---|---|---|---|
| print() | Print result to terminal | format_json | bool | Whether to format the output with JSON indentation | True |
| | | indent | int | Indentation level used to beautify the output JSON data, making it more readable; effective only when format_json is True | 4 |
| | | ensure_ascii | bool | Whether to escape non-ASCII characters as Unicode. When True, all non-ASCII characters are escaped; when False, the original characters are kept. Effective only when format_json is True | False |
| save_to_json() | Save result as a JSON file | save_path | str | Path to save the file. If it is a directory, the saved file is named after the input file | None |
| | | indent | int | Indentation level used to beautify the output JSON data, making it more readable; effective only when format_json is True | 4 |
| | | ensure_ascii | bool | Whether to escape non-ASCII characters as Unicode. When True, all non-ASCII characters are escaped; when False, the original characters are kept. Effective only when format_json is True | False |
  • In addition, it also supports obtaining results through attributes, as follows:
| Attribute | Description |
|---|---|
| json | Get the prediction result in JSON format |
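
The following is a minimal sketch that combines the options above: it instantiates the model on the CPU with a specific thread count, iterates over results with predict_iter(), and prints and saves each result. The input list and output paths are placeholders for your own data:

from paddleocr import TableStructureRecognition

# Run on CPU with 8 threads; change device to "gpu:0" if a GPU is available.
model = TableStructureRecognition(model_name="SLANet_plus", device="cpu", cpu_threads=8)

# predict_iter() returns a generator, so results are produced one at a time.
inputs = ["table_recognition.jpg"]  # placeholder list; local directories or numpy arrays also work
for i, res in enumerate(model.predict_iter(input=inputs, batch_size=1)):
    res.print()
    res.save_to_json(f"./output/table_{i}.json")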

4. Secondary Development

If the above models are still not ideal for your scenario, you can try the following steps for secondary development. Here, training SLANet is used as an example, and for other models, just replace the corresponding configuration file. First, you need to prepare a dataset for table structure recognition, which can be prepared with reference to the format of the table structure recognition demo data. Once ready, you can train and export the model as follows. After exporting, you can quickly integrate the model into the above API. Here, the table structure recognition demo data is used as an example. Before training the model, please make sure you have installed the dependencies required by PaddleOCR according to the installation documentation.

4.1 Dataset and Pretrained Model Preparation

4.1.1 Prepare Dataset

# Download sample dataset
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/table_rec_dataset_examples.tar
tar -xf table_rec_dataset_examples.tar
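
Before preparing your own data, it can help to peek at the annotation format of the demo dataset. The sketch below makes no assumption about specific file names; it simply lists the text files in the extracted directory (assumed to be table_rec_dataset_examples) and prints a truncated first record of each:

import pathlib

# Directory name assumed from the archive name above.
dataset_dir = pathlib.Path("table_rec_dataset_examples")
for txt_file in sorted(dataset_dir.glob("*.txt")):
    with open(txt_file, "r", encoding="utf-8") as f:
        first_line = f.readline().strip()
    print(f"{txt_file.name}: {first_line[:200]}")  # show a sample of the annotation format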

4.1.2 Download Pretrained Model

# Download SLANet pretrained model
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/SLANet_pretrained.pdparams

4.2 Model Training

PaddleOCR is modularized. When training the SLANet recognition model, you need to use the configuration file of SLANet.

The training commands are as follows:

# Single card training (default training method)
python3 tools/train.py -c configs/table/SLANet.yml \
   -o Global.pretrained_model=./SLANet_pretrained.pdparams
# Multi-card training, specify card numbers via --gpus parameter
python3 -m paddle.distributed.launch --gpus '0,1,2,3'  tools/train.py -c configs/table/SLANet.yml \
        -o Global.pretrained_model=./SLANet_pretrained.pdparams

4.3 Model Evaluation

You can evaluate the trained weights, such as output/xxx/xxx.pdparams, using the following command:

# Make sure pretrained_model is set to a local path. If you use weights saved from your own training, change the path and file name to {path/to/weights}/{model_name}.
# Demo test set evaluation
python3 tools/eval.py -c configs/table/SLANet.yml -o \
Global.pretrained_model=output/xxx/xxx.pdparams

4.4 Model Export

python3 tools/export_model.py -c configs/table/SLANet.yml -o \
Global.pretrained_model=output/xxx/xxx.pdparams \
Global.save_inference_dir="./SLANet_infer/"

After exporting the model, the static graph model will be stored in ./SLANet_infer/ in the current directory. In this directory, you will see the following files:

./SLANet_infer/
├── inference.json
├── inference.pdiparams
└── inference.yml

At this point, secondary development is complete, and this static graph model can be directly integrated into the PaddleOCR API.
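
To verify the exported model, it can be loaded through the same API used in the Quick Start by pointing model_dir at the export directory. A minimal sketch, assuming the export directory produced in the previous step:

from paddleocr import TableStructureRecognition

# Load the exported static graph model via model_dir instead of the built-in weights.
model = TableStructureRecognition(model_name="SLANet", model_dir="./SLANet_infer/")
output = model.predict(input="table_recognition.jpg", batch_size=1)
for res in output:
    res.print()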

5. FAQ
