PP-Structure Quick Start

1. Environment Preparation

1.1 Install PaddlePaddle

If you do not have a Python environment, please refer to Environment Preparation.

  • PaddlePaddle with CUDA 11.8
python3 -m pip install paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/stable/cu118/
  • PaddlePaddle with CUDA 12.3
python3 -m pip install paddlepaddle-gpu -i https://www.paddlepaddle.org.cn/packages/stable/cu123/
  • If your machine does not have an available GPU, please run the following command to install the CPU version
python3 -m pip install paddlepaddle -i https://www.paddlepaddle.org.cn/packages/stable/cpu/

For more software version requirements, please refer to the instructions in the Installation Document.
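
To verify that PaddlePaddle was installed correctly, you can optionally run its built-in check, for example:

# Optional sanity check that PaddlePaddle can be imported and run
import paddle
paddle.utils.run_check()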

1.2 Install PaddleOCR Whl Package

python3 -m pip install paddleocr

# Install the image direction classification dependency package paddleclas (if you do not use the image direction classification, you can skip it)
python3 -m pip install paddleclas

2. Quick Use

2.1 Use by command line

2.1.1 image orientation + layout analysis + table recognition

# Temporarily disable the new IR feature
export FLAGS_enable_pir_api=0
paddleocr --image_dir=ppstructure/docs/table/1.png --type=structure --image_orientation=true

2.1.2 layout analysis + table recognition

paddleocr --image_dir=ppstructure/docs/table/1.png --type=structure

2.1.3 layout analysis

paddleocr --image_dir=ppstructure/docs/table/1.png --type=structure --table=false --ocr=false

2.1.4 table recognition

paddleocr --image_dir=ppstructure/docs/table/table.jpg --type=structure --layout=false

2.1.5 Key Information Extraction

Key information extraction is not currently supported via the whl package. For a detailed usage tutorial, please refer to: Key Information Extraction.

2.1.6 layout recovery

Two layout recovery methods are provided. For detailed usage tutorials, please refer to: Layout Recovery.

  • PDF parse
  • OCR

Recovery by using PDF parse (only supports PDF as input):

paddleocr --image_dir=ppstructure/docs/recovery/UnrealText.pdf --type=structure --recovery=true --use_pdf2docx_api=true

Recovery by using OCR:

paddleocr --image_dir=ppstructure/docs/table/1.png --type=structure --recovery=true --lang='en'

2.1.7 layout recovery (PDF to Markdown)

Without using the LaTeX-OCR model for formula recognition:

paddleocr --image_dir=ppstructure/docs/recovery/UnrealText.pdf --type=structure --recovery=true --recovery_to_markdown=true --lang='en'

Using the LaTeX-OCR model for formula recognition (the Chinese layout model must be used):

paddleocr --image_dir=ppstructure/docs/recovery/UnrealText.pdf --type=structure --recovery=true --formula=true --recovery_to_markdown=true --lang='ch'

2.2 Use by python script

2.2.1 image orientation + layout analysis + table recognition

import os
import cv2
from paddleocr import PPStructure,draw_structure_result,save_structure_res

table_engine = PPStructure(show_log=True, image_orientation=True)

save_folder = './output'
img_path = 'ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder,os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

from PIL import Image

font_path = 'doc/fonts/simfang.ttf' # font provided in PaddleOCR
image = Image.open(img_path).convert('RGB')
im_show = draw_structure_result(image, result,font_path=font_path)
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')

2.2.2 layout analysis + table recognition

import os
import cv2
from paddleocr import PPStructure,draw_structure_result,save_structure_res

table_engine = PPStructure(show_log=True)

save_folder = './output'
img_path = 'ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder,os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

from PIL import Image

font_path = 'doc/fonts/simfang.ttf' # font provided in PaddleOCR
image = Image.open(img_path).convert('RGB')
im_show = draw_structure_result(image, result,font_path=font_path)
im_show = Image.fromarray(im_show)
im_show.save('result.jpg')

2.2.3 layout analysis

import os
import cv2
from paddleocr import PPStructure,save_structure_res

table_engine = PPStructure(table=False, ocr=False, show_log=True)

save_folder = './output'
img_path = 'ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

# Layout analysis on a PDF file: pass the file path directly; each page is returned as a separate result
import os
import cv2
from paddleocr import PPStructure,save_structure_res

ocr_engine = PPStructure(table=False, ocr=True, show_log=True)

save_folder = './output'
img_path = 'ppstructure/docs/recovery/UnrealText.pdf'
result = ocr_engine(img_path)
for index, res in enumerate(result):
    save_structure_res(res, save_folder, os.path.basename(img_path).split('.')[0], index)

for res in result:
    for line in res:
        line.pop('img')
        print(line)

# Alternative: convert the PDF pages to images with PyMuPDF (fitz) first, then run layout analysis on each page
import os
import cv2
import numpy as np
from paddleocr import PPStructure,save_structure_res
from paddle.utils import try_import
from PIL import Image

ocr_engine = PPStructure(table=False, ocr=True, show_log=True)

save_folder = './output'
img_path = 'ppstructure/docs/recovery/UnrealText.pdf'

fitz = try_import("fitz")
imgs = []
with fitz.open(img_path) as pdf:
    for pg in range(0, pdf.page_count):
        page = pdf[pg]
        mat = fitz.Matrix(2, 2)
        pm = page.get_pixmap(matrix=mat, alpha=False)

        # if width or height > 2000 pixels, don't enlarge the image
        if pm.width > 2000 or pm.height > 2000:
            pm = page.get_pixmap(matrix=fitz.Matrix(1, 1), alpha=False)

        img = Image.frombytes("RGB", [pm.width, pm.height], pm.samples)
        img = cv2.cvtColor(np.array(img), cv2.COLOR_RGB2BGR)
        imgs.append(img)

for index, img in enumerate(imgs):
    result = ocr_engine(img)
    save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0], index)
    for line in result:
        line.pop('img')
        print(line)

2.2.4 table recognition

import os
import cv2
from paddleocr import PPStructure,save_structure_res

table_engine = PPStructure(layout=False, show_log=True)

save_folder = './output'
img_path = 'ppstructure/docs/table/table.jpg'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

2.2.5 Key Information Extraction

Key information extraction is not currently supported via the whl package. For a detailed usage tutorial, please refer to: Inference.

2.2.6 layout recovery

import os
import cv2
from paddleocr import PPStructure,save_structure_res
from paddleocr.ppstructure.recovery.recovery_to_doc import sorted_layout_boxes, convert_info_docx

# Chinese image
table_engine = PPStructure(recovery=True)
# English image
# table_engine = PPStructure(recovery=True, lang='en')

save_folder = './output'
img_path = 'ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

h, w, _ = img.shape
res = sorted_layout_boxes(result, w)
convert_info_docx(img, res, save_folder, os.path.basename(img_path).split('.')[0])

2.2.7 layout recovery (PDF to Markdown)

import os
import cv2
from paddleocr import PPStructure,save_structure_res
from paddleocr.ppstructure.recovery.recovery_to_doc import sorted_layout_boxes
from paddleocr.ppstructure.recovery.recovery_to_markdown import convert_info_markdown

# Chinese image
table_engine = PPStructure(recovery=True)
# English image
# table_engine = PPStructure(recovery=True, lang='en')

save_folder = './output'
img_path = 'ppstructure/docs/table/1.png'
img = cv2.imread(img_path)
result = table_engine(img)
save_structure_res(result, save_folder, os.path.basename(img_path).split('.')[0])

for line in result:
    line.pop('img')
    print(line)

h, w, _ = img.shape
res = sorted_layout_boxes(result, w)
convert_info_markdown(res, save_folder, os.path.basename(img_path).split('.')[0])

2.3 Result description

PP-Structure returns a list of dicts; an example is shown below:

2.3.1 layout analysis + table recognition

[
  {   'type': 'Text',
      'bbox': [34, 432, 345, 462],
      'res': ([[36.0, 437.0, 341.0, 437.0, 341.0, 446.0, 36.0, 447.0], [41.0, 454.0, 125.0, 453.0, 125.0, 459.0, 41.0, 460.0]],
                [('Tigure-6. The performance of CNN and IPT models using difforen', 0.90060663), ('Tent  ', 0.465441)])
  }
]

Each field in the dict is described as follows:

field | description
type | Type of the image region.
bbox | Coordinates of the image region in the original image, given as [top-left x, top-left y, bottom-right x, bottom-right y].
res | OCR or table recognition result of the image region.
      For a table region, res is a dict with the following fields:
        html: HTML string of the table.
        In code usage, set return_ocr_result_in_table=True when calling to also get the detection and recognition results of each text inside the table region, in the following fields:
        boxes: text detection boxes.
        rec_res: text recognition results.
      For other regions, res is a tuple containing the detection boxes and recognition results of each single text.
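
As an illustration, the sketch below shows how these fields can be accessed in code, reusing the result list from the examples above and assuming PPStructure was constructed with return_ocr_result_in_table=True:

# Minimal sketch: iterate over the returned regions and read the fields described above
for region in result:
    x1, y1, x2, y2 = region['bbox']               # [top-left x, top-left y, bottom-right x, bottom-right y]
    if region['type'].lower() == 'table':
        html = region['res']['html']              # HTML string of the recognized table
        boxes = region['res'].get('boxes')        # text detection boxes inside the table (if requested)
        rec_res = region['res'].get('rec_res')    # text recognition results inside the table (if requested)
    else:
        det_boxes, rec_results = region['res']    # detection boxes and (text, score) tuples
        print(region['type'], (x1, y1, x2, y2), rec_results)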

After recognition is completed, each image has a directory with the same name under the directory specified by the output parameter. Each table in the image is stored as an Excel file, and each figure region is cropped and saved as an image. The Excel and image file names are the coordinates of the corresponding region in the original image.

/output/table/1/
  └─ res.txt
  └─ [454, 360, 824, 658].xlsx        table recognition result
  └─ [16, 2, 828, 305].jpg            cropped figure region
  └─ [17, 361, 404, 711].xlsx         table recognition result
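
If you want to post-process a saved table result, one option (a sketch that assumes pandas and openpyxl are installed; the file name is the illustrative one from the listing above) is to load the Excel file back into Python:

# Sketch: load a saved table recognition result for further processing
import pandas as pd

table_df = pd.read_excel('./output/table/1/[454, 360, 824, 658].xlsx', header=None)
print(table_df.head())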

2.3.2 Key Information Extraction

Please refer to: Key Information Extraction.

2.4 Parameter Description

field | description | default
output | The path where results are saved | ./output/table
table_max_len | Long side of the image resized in the table structure model | 488
table_model_dir | Inference model path of the table structure model | None
table_char_dict_path | Dictionary path of the table structure model | ../ppocr/utils/dict/table_structure_dict.txt
merge_no_span_structure | In the table recognition model, whether to merge the '<td>' and '</td>' tags | False
formula_model_dir | Inference model path of the formula recognition model | None
formula_char_dict_path | Dictionary path of the formula recognition model | ../ppocr/utils/dict/latex_ocr_tokenizer.json
layout_model_dir | Inference model path of the layout analysis model | None
layout_dict_path | Dictionary path of the layout analysis model | ../ppocr/utils/dict/layout_publaynet_dict.txt
layout_score_threshold | Box score threshold of the layout analysis model | 0.5
layout_nms_threshold | NMS threshold of the layout analysis model | 0.5
kie_algorithm | KIE model algorithm | LayoutXLM
ser_model_dir | Inference model path of the SER model | None
ser_dict_path | Dictionary path of the SER model | ../train_data/XFUND/class_list_xfun.txt
mode | structure or kie | structure
image_orientation | Whether to perform image orientation classification in forward | False
layout | Whether to perform layout analysis in forward | True
table | Whether to perform table recognition in forward | True
formula | Whether to perform formula recognition in forward | False
ocr | Whether to perform OCR for non-table areas in layout analysis; when layout is False, this is automatically set to False | True
recovery | Whether to perform layout recovery in forward | False
recovery_to_markdown | Whether to convert the layout recovery results into a Markdown file | False
save_pdf | Whether to also convert the docx to a PDF during recovery | False
structure_version | Structure version, optional PP-structure and PP-structurev2 | PP-structure

Most of the parameters are consistent with those of the PaddleOCR whl package; see the whl package documentation.
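
As an illustration only, the sketch below combines several of these parameters when constructing the engine; the chosen values are examples, not recommendations:

from paddleocr import PPStructure

# Sketch: pass a selection of the parameters above to the whl-package engine
engine = PPStructure(
    layout=True,                   # perform layout analysis
    table=True,                    # perform table recognition
    ocr=True,                      # run OCR on non-table regions
    image_orientation=False,       # skip image orientation classification
    recovery=False,                # do not perform layout recovery
    structure_version='PP-structurev2',
    show_log=True,
)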

3. Summary

Through the content of this section, you can learn how to use the PP-Structure functions via the PaddleOCR whl package. Please refer to the documentation tutorials for more detailed usage, including model training, inference, and deployment.
