
Layout Detection Module Tutorial

I. Overview

The core task of structure analysis is to parse and segment the content of input document images. It identifies the different elements in an image (such as text, charts, and images), classifies them into predefined categories (e.g., pure text area, title area, table area, image area, list area), and determines the position and size of each region in the document.

II. Supported Model List

  • The layout detection model includes 20 common categories: document title, paragraph title, text, page number, abstract, table of contents, references, footnotes, header, footer, algorithm, formula, formula number, image, table, seal, figure-table title, chart, sidebar text, and list of references.
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PP-DocLayout_plus-L | Inference Model / Training Model | 83.2 | 53.03 / 17.23 | 634.62 / 378.32 | 126.01 | A higher-precision layout area localization model trained with RT-DETR-L on a self-built dataset containing Chinese and English papers, PPTs, multi-layout magazines, contracts, books, exams, ancient books, and research reports. |
  • The layout detection model includes 1 category: Block.
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PP-DocBlockLayout | Inference Model / Training Model | 95.9 | 34.60 / 28.54 | 506.43 / 256.83 | 123.92 | A layout block localization model trained with RT-DETR-L on a self-built dataset containing Chinese and English papers, PPTs, multi-layout magazines, contracts, books, exams, ancient books, and research reports. |
  • The layout detection model includes 23 common categories: document title, paragraph title, text, page number, abstract, table of contents, references, footnotes, header, footer, algorithm, formula, formula number, image, figure caption, table, table caption, seal, figure title, figure, header image, footer image, and sidebar text
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PP-DocLayout-L | Inference Model / Training Model | 90.4 | 33.59 / 33.59 | 503.01 / 251.08 | 123.76 | A high-precision layout area localization model trained with RT-DETR-L on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports. |
| PP-DocLayout-M | Inference Model / Training Model | 75.2 | 13.03 / 4.72 | 43.39 / 24.44 | 22.578 | A layout area localization model with balanced precision and efficiency, trained with PicoDet-L on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports. |
| PP-DocLayout-S | Inference Model / Training Model | 70.9 | 11.54 / 3.86 | 18.53 / 6.29 | 4.834 | A high-efficiency layout area localization model trained with PicoDet-S on a self-built dataset containing Chinese and English papers, magazines, contracts, books, exams, and research reports. |

❗ The above list includes the 4 core models that receive key support from the layout detection module. The module supports a total of 12 models, including several predefined models with different category sets. The complete model list is as follows:

👉 Details of Model List

* Table Layout Detection Model
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PicoDet_layout_1x_table | Inference Model / Training Model | 97.5 | 9.57 / 6.63 | 27.66 / 16.75 | 7.4 | A high-efficiency layout area localization model trained with PicoDet-1x on a self-built dataset, capable of detecting table regions. |
* 3-Class Layout Detection Model, including Table, Image, and Stamp
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PicoDet-S_layout_3cls | Inference Model / Training Model | 88.2 | 8.43 / 3.44 | 17.60 / 6.51 | 4.8 | A high-efficiency layout area localization model trained with PicoDet-S on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| PicoDet-L_layout_3cls | Inference Model / Training Model | 89.0 | 12.80 / 9.57 | 45.04 / 23.86 | 22.6 | A layout area localization model with balanced efficiency and precision, trained with PicoDet-L on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| RT-DETR-H_layout_3cls | Inference Model / Training Model | 95.8 | 114.80 / 25.65 | 924.38 / 924.38 | 470.1 | A high-precision layout area localization model trained with RT-DETR-H on a self-built dataset of Chinese and English papers, magazines, and research reports. |
* 5-Class English Document Area Detection Model, including Text, Title, Table, Image, and List
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PicoDet_layout_1x | Inference Model / Training Model | 97.8 | 9.62 / 6.75 | 26.96 / 12.77 | 7.4 | A high-efficiency English document layout area localization model trained with PicoDet-1x on the PubLayNet dataset. |
* 17-Class Area Detection Model, including 17 common layout categories: Paragraph Title, Image, Text, Number, Abstract, Content, Figure Caption, Formula, Table, Table Caption, References, Document Title, Footnote, Header, Algorithm, Footer, and Stamp
| Model | Model Download Link | mAP(0.5) (%) | GPU Inference Time (ms) [Normal / High-Performance] | CPU Inference Time (ms) [Normal / High-Performance] | Model Storage Size (MB) | Introduction |
| --- | --- | --- | --- | --- | --- | --- |
| PicoDet-S_layout_17cls | Inference Model / Training Model | 87.4 | 8.80 / 3.62 | 17.51 / 6.35 | 4.8 | A high-efficiency layout area localization model trained with PicoDet-S on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| PicoDet-L_layout_17cls | Inference Model / Training Model | 89.0 | 12.60 / 10.27 | 43.70 / 24.42 | 22.6 | A layout area localization model with balanced efficiency and precision, trained with PicoDet-L on a self-built dataset of Chinese and English papers, magazines, and research reports. |
| RT-DETR-H_layout_17cls | Inference Model / Training Model | 98.3 | 115.29 / 101.18 | 964.75 / 964.75 | 470.2 | A high-precision layout area localization model trained with RT-DETR-H on a self-built dataset of Chinese and English papers, magazines, and research reports. |
Test Environment Description:
  • Performance Test Environment
    • Test Dataset:
      • 20-category layout detection model: PaddleOCR's self-built layout area detection dataset, containing 1,300 images of document types such as Chinese and English papers, magazines, newspapers, research reports, PPTs, exam papers, and textbooks.
      • 1-category layout block detection model: PaddleOCR's self-built layout block detection dataset, containing 1,000 images of document types such as Chinese and English papers, magazines, newspapers, research reports, PPTs, exam papers, and textbooks.
      • 23-category layout detection model: PaddleOCR's self-built layout area detection dataset, containing 500 images of common document types such as Chinese and English papers, magazines, contracts, books, exam papers, and research reports.
      • Table Layout Detection Model: A self-built table area detection dataset by PaddleOCR, including 7,835 Chinese and English paper document type images with tables.
      • 3-Class Layout Detection Model: A self-built layout area detection dataset by PaddleOCR, comprising 1,154 common document type images such as Chinese and English papers, magazines, and research reports.
      • 5-Class English Document Area Detection Model: The evaluation dataset of PubLayNet, containing 11,245 images of English documents.
      • 17-Class Area Detection Model: A self-built layout area detection dataset by PaddleOCR, including 892 common document type images such as Chinese and English papers, magazines, and research reports.
    • Hardware Configuration:
      • GPU: NVIDIA Tesla T4
      • CPU: Intel Xeon Gold 6271C @ 2.60GHz
    • Software Environment:
      • Ubuntu 20.04 / CUDA 11.8 / cuDNN 8.9 / TensorRT 8.6.1.6
      • paddlepaddle 3.0.0 / paddleocr 3.0.3
  • Inference Mode Description
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
| --- | --- | --- | --- |
| Normal Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 precision / 8 threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |

III. Quick Integration

❗ Before quick integration, please install the PaddleOCR wheel package. For detailed instructions, refer to the PaddleOCR Local Installation Tutorial.

You can quickly experience the module with just one command:

paddleocr layout_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/layout.jpg

Note: The official models are downloaded from HuggingFace by default. If HuggingFace is not accessible, you can set the environment variable PADDLE_PDX_MODEL_SOURCE="BOS" to switch the model source to BOS. In the future, more model sources will be supported.
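For example, a minimal sketch of switching the model source from Python before any model is created (this assumes the environment variable is read when the model files are downloaded):

import os

# Switch the official model download source to BOS; useful when
# HuggingFace is not accessible.
os.environ["PADDLE_PDX_MODEL_SOURCE"] = "BOS"

from paddleocr import LayoutDetection
model = LayoutDetection(model_name="PP-DocLayout_plus-L")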

You can also integrate model inference from the layout detection module into your project. Before running the following code, please download the example image to your local machine.

from paddleocr import LayoutDetection

# Load the layout detection model.
model = LayoutDetection(model_name="PP-DocLayout_plus-L")
# Run inference on the example image, filtering overlapping boxes with NMS.
output = model.predict("layout.jpg", batch_size=1, layout_nms=True)
for res in output:
    res.print()                                      # print the result to the terminal
    res.save_to_img(save_path="./output/")           # save the visualized image
    res.save_to_json(save_path="./output/res.json")  # save the result as JSON

After running, the result obtained is:

{'res': {'input_path': 'layout.jpg', 'page_index': None, 'boxes': [{'cls_id': 2, 'label': 'text', 'score': 0.9870226979255676, 'coordinate': [34.101906, 349.85275, 358.59213, 611.0772]}, {'cls_id': 2, 'label': 'text', 'score': 0.9866003394126892, 'coordinate': [34.500324, 647.1585, 358.29367, 848.66797]}, {'cls_id': 2, 'label': 'text', 'score': 0.9846674203872681, 'coordinate': [385.71445, 497.40973, 711.2261, 697.84265]}, {'cls_id': 8, 'label': 'table', 'score': 0.984126091003418, 'coordinate': [73.76879, 105.94899, 321.95303, 298.84888]}, {'cls_id': 8, 'label': 'table', 'score': 0.9834211468696594, 'coordinate': [436.95642, 105.81531, 662.7168, 313.48462]}, {'cls_id': 2, 'label': 'text', 'score': 0.9832247495651245, 'coordinate': [385.62787, 346.2288, 710.10095, 458.77127]}, {'cls_id': 2, 'label': 'text', 'score': 0.9816061854362488, 'coordinate': [385.7802, 735.1931, 710.56134, 849.9764]}, {'cls_id': 6, 'label': 'figure_title', 'score': 0.9577341079711914, 'coordinate': [34.421448, 20.055151, 358.71283, 76.53663]}, {'cls_id': 6, 'label': 'figure_title', 'score': 0.9505634307861328, 'coordinate': [385.72278, 20.053688, 711.29333, 74.92744]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.9001723527908325, 'coordinate': [386.46344, 477.03488, 699.4023, 490.07474]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.8845751285552979, 'coordinate': [35.413048, 627.73596, 185.58383, 640.52264]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.8837394118309021, 'coordinate': [387.17603, 716.3423, 524.7841, 729.258]}, {'cls_id': 0, 'label': 'paragraph_title', 'score': 0.8508939743041992, 'coordinate': [35.50064, 331.18445, 141.6444, 344.81097]}]}}

The meanings of the parameters are as follows:

  • input_path: The path of the input image used for prediction.
  • page_index: If the input is a PDF file, this indicates the current page number of the PDF; otherwise, it is None.
  • boxes: Information about the predicted bounding boxes, a list of dictionaries. Each dictionary represents a detected object and contains:
    • cls_id: Class ID, an integer.
    • label: Class label, a string.
    • score: Confidence score of the bounding box, a float.
    • coordinate: Coordinates of the bounding box, a list of floats in the format [xmin, ymin, xmax, ymax].
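As an illustration, the boxes list can be consumed as below. This is only a sketch: the key names follow the result structure above, the res.json attribute is assumed to mirror the printed dict, and the 0.9 cutoff is an arbitrary choice.

# Hypothetical post-processing of the prediction results shown above.
for res in output:
    data = res.json["res"]           # prediction result as a dict
    for box in data["boxes"]:
        if box["score"] < 0.9:       # arbitrary confidence cutoff for this sketch
            continue
        xmin, ymin, xmax, ymax = box["coordinate"]
        print(f"{box['label']}: ({xmin:.1f}, {ymin:.1f}) -> ({xmax:.1f}, {ymax:.1f})")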

The visualized image is as follows:

Relevant methods, parameters, and explanations are as follows:

  • LayoutDetection instantiates the layout detection model (here, PP-DocLayout_plus-L is used as an example; a usage sketch follows the parameter list below). The detailed explanation is as follows:
    • model_name (str|None, default: None): Model name. If set to None, PP-DocLayout-L is used.
    • model_dir (str|None, default: None): Model storage path.
    • device (str|None, default: None): Device for inference, e.g. "cpu", "gpu", "npu", "gpu:0", "gpu:0,1". If multiple devices are specified, parallel inference is performed. By default, GPU 0 is used if available; otherwise, the CPU is used.
    • enable_hpi (bool, default: False): Whether to enable high-performance inference.
    • use_tensorrt (bool, default: False): Whether to use the Paddle Inference TensorRT subgraph engine. If the model does not support TensorRT acceleration, setting this flag will not enable it. For Paddle with CUDA 11.8, the compatible TensorRT version is 8.x (x>=6); TensorRT 8.6.1.6 is recommended. For Paddle with CUDA 12.6, the compatible TensorRT version is 10.x (x>=5); TensorRT 10.5.0.18 is recommended.
    • precision (str, default: "fp32"): Computation precision when using the TensorRT subgraph engine in Paddle Inference. Options: "fp32", "fp16".
    • enable_mkldnn (bool, default: True): Whether to enable MKL-DNN acceleration for inference. If MKL-DNN is unavailable or the model does not support it, acceleration is not used even if this flag is set.
    • mkldnn_cache_capacity (int, default: 10): MKL-DNN cache capacity.
    • cpu_threads (int, default: 10): Number of threads used for inference on CPUs.
    • img_size (int|list|None, default: None): Input image size.
      • int: e.g. 640, resizes the input image to 640x640.
      • list: e.g. [640, 512], resizes the input image to width 640 and height 512.
    • threshold (float|dict|None, default: None): Threshold for filtering out low-confidence predictions.
      • float: e.g. 0.2, filters out all boxes with confidence below 0.2.
      • dict: the key is an int class id and the value is a float threshold. For example, {0: 0.45, 2: 0.48, 7: 0.4} means class 0 uses threshold 0.45, class 2 uses 0.48, and class 7 uses 0.4.
      • None: uses the model's default configuration.
    • layout_nms (bool|None, default: None): Whether to use NMS post-processing to filter overlapping boxes.
      • None: uses the model's default configuration.
    • layout_unclip_ratio (float|list|dict|None, default: None): Scaling factor for the side length of the detection boxes.
      • float: a float greater than 0, e.g. 1.1, expands both width and height by 1.1x.
      • list: e.g. [1.2, 1.5], expands width by 1.2x and height by 1.5x.
      • dict: the key is an int class id and the value is a tuple of two floats (width ratio, height ratio). For example, {0: (1.1, 2.0)} means class 0 boxes are expanded 1.1x in width and 2.0x in height.
      • None: uses the model's default configuration.
    • layout_merge_bboxes_mode (str|dict|None, default: None): Merge mode for the model's output bounding boxes.
      • "large": among overlapping boxes, keep only the largest outer box and remove the inner ones.
      • "small": among overlapping boxes, keep only the smallest inner box and remove the outer ones.
      • "union": keep all boxes, no filtering.
      • dict: the key is an int class id and the value is a mode string. For example, {0: "large", 2: "small"} means class 0 uses "large" mode and class 2 uses "small" mode.
      • None: uses the model's default configuration.
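For instance, here is a minimal instantiation sketch combining several of the parameters above; the specific values are illustrative only, not recommended settings:

from paddleocr import LayoutDetection

# Illustrative instantiation; the values are examples, not recommendations.
model = LayoutDetection(
    model_name="PP-DocLayout_plus-L",
    device="cpu",                      # run on CPU
    threshold={0: 0.45, 2: 0.48},      # per-class confidence thresholds
    layout_nms=True,                   # filter overlapping boxes with NMS
    layout_unclip_ratio=[1.2, 1.5],    # expand boxes 1.2x in width, 1.5x in height
    layout_merge_bboxes_mode="large",  # keep the outer box among overlapping boxes
)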
  • The predict() method of the layout detection model is called for inference. Its parameters are input, batch_size, threshold, layout_nms, layout_unclip_ratio, and layout_merge_bboxes_mode, explained as follows (a call sketch follows this list):
    • input (Python Var|str|list, required): Input data to be predicted. Supports multiple input types:
      • Python Var: e.g. numpy.ndarray representing image data.
      • str: a local image or PDF file path (e.g. /root/data/img.jpg); a URL of an image or PDF file; or a local directory containing images to predict (e.g. /root/data/). Note: directories containing PDF files are not supported; PDFs must be specified by exact file path.
      • list: elements must be of the above types, e.g. [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"].
    • batch_size (int, default: 1): Batch size, a positive integer.
    • threshold (float|dict|None, default: None): Same meaning as the instantiation parameter. If set to None, the instantiation value is used; otherwise, this parameter takes precedence.
    • layout_nms (bool|None, default: None): Same meaning as the instantiation parameter. If set to None, the instantiation value is used; otherwise, this parameter takes precedence.
    • layout_unclip_ratio (float|list|dict|None, default: None): Same meaning as the instantiation parameter. If set to None, the instantiation value is used; otherwise, this parameter takes precedence.
    • layout_merge_bboxes_mode (str|dict|None, default: None): Same meaning as the instantiation parameter. If set to None, the instantiation value is used; otherwise, this parameter takes precedence.
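As a usage sketch, predict() can be called on a list of local files with a per-call threshold override; the file paths here are placeholders:

# Hypothetical batch prediction over two local images (placeholder paths).
output = model.predict(
    ["/root/data/img1.jpg", "/root/data/img2.jpg"],
    batch_size=2,
    threshold=0.3,  # overrides the instantiation-time threshold for this call
)
for res in output:
    res.print()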
  • Process the prediction results. The prediction result of each sample is a corresponding Result object, which supports printing, saving as an image, and saving as a JSON file:
    • print(): Print the result to the terminal.
      • format_json (bool, default: True): Whether to format the output with JSON indentation.
      • indent (int, default: 4): Indentation level for the JSON output to enhance readability; only valid when format_json is True.
      • ensure_ascii (bool, default: False): Whether to escape non-ASCII characters as Unicode escapes. When True, all non-ASCII characters are escaped; when False, the original characters are preserved. Only valid when format_json is True.
    • save_to_json(): Save the result as a JSON file.
      • save_path (str): The path to save the file. When it is a directory, the saved file is named consistently with the input file.
      • indent (int, default: 4): Indentation level for the JSON output to enhance readability; only valid when format_json is True.
      • ensure_ascii (bool, default: False): Whether to escape non-ASCII characters as Unicode escapes. When True, all non-ASCII characters are escaped; when False, the original characters are preserved. Only valid when format_json is True.
    • save_to_img(): Save the result as an image file.
      • save_path (str): The path to save the file. When it is a directory, the saved file is named consistently with the input file.
  • Additionally, the prediction result and the visualized image can also be obtained via the following attributes (a short access sketch follows):
    • json: Gets the prediction result in json format.
    • img: Gets the visualized image in dict format.
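A short access sketch, assuming res.json mirrors the printed structure shown earlier and res.img maps image names to PIL-style image objects:

# Hypothetical attribute access; key names follow the result shown above.
for res in output:
    data = res.json["res"]    # prediction result as a dict
    print(len(data["boxes"]), "regions detected")
    for name, image in res.img.items():
        image.save(f"./output/{name}.png")  # assumes PIL-style image objects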

IV. Custom Development

Since PaddleOCR does not directly provide training for the layout detection module, if you need to train a layout detection model, you can refer to the Secondary Development section of the PaddleX Layout Detection Module documentation to conduct training. The trained model can be seamlessly integrated into PaddleOCR's API for inference.

V. FAQ
