# Introduction to PP-StructureV3

The PP-StructureV3 pipeline builds on the Layout Parsing v1 pipeline with stronger layout detection, table recognition, and formula recognition. It adds chart understanding, reading-order restoration, and conversion of results into Markdown files. It performs excellently on a wide variety of document data and can handle more complex documents. The pipeline also offers flexible serving deployment options, supporting multiple programming languages on various hardware, as well as secondary development: you can train and optimize models on your own dataset, and the trained models can be integrated seamlessly.
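As a quick taste of the pipeline, the sketch below runs PP-StructureV3 from Python and exports each page as Markdown and JSON. It assumes a PaddleOCR 3.0 installation and uses a placeholder input path; see the usage documentation for the full set of options.

```python
from paddleocr import PPStructureV3

# Build the pipeline with its default configuration.
pipeline = PPStructureV3()

# "./demo.pdf" is a placeholder path; single images are accepted as well.
output = pipeline.predict("./demo.pdf")

# Each result corresponds to one page of the input document.
for res in output:
    res.save_to_markdown(save_path="./output")  # Markdown conversion
    res.save_to_json(save_path="./output")      # raw structured results
```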

## Key Metrics
| Method Type | Method | Overall Edit↓ (EN) | Overall Edit↓ (ZH) | Text Edit↓ (EN) | Text Edit↓ (ZH) | Formula Edit↓ (EN) | Formula Edit↓ (ZH) | Table Edit↓ (EN) | Table Edit↓ (ZH) | Read Order Edit↓ (EN) | Read Order Edit↓ (ZH) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pipeline Tools | PP-StructureV3 | 0.147 | 0.212 | 0.059 | 0.09 | 0.295 | 0.535 | 0.159 | 0.109 | 0.075 | 0.114 |
| | MinerU-0.9.3 | 0.15 | 0.357 | 0.061 | 0.215 | 0.278 | 0.577 | 0.18 | 0.344 | 0.079 | 0.292 |
| | MinerU-1.3.11 | 0.166 | 0.310 | 0.0826 | 0.2000 | 0.3368 | 0.6236 | 0.1613 | 0.1833 | 0.0834 | 0.2316 |
| | Marker-1.2.3 | 0.336 | 0.556 | 0.08 | 0.315 | 0.53 | 0.883 | 0.619 | 0.685 | 0.114 | 0.34 |
| | Mathpix | 0.191 | 0.365 | 0.105 | 0.384 | 0.306 | 0.454 | 0.243 | 0.32 | 0.108 | 0.304 |
| | Docling-2.14.0 | 0.589 | 0.909 | 0.416 | 0.987 | 0.999 | 1 | 0.627 | 0.81 | 0.313 | 0.837 |
| | Pix2Text-1.1.2.3 | 0.32 | 0.528 | 0.138 | 0.356 | 0.276 | 0.611 | 0.584 | 0.645 | 0.281 | 0.499 |
| | Unstructured-0.17.2 | 0.586 | 0.716 | 0.198 | 0.481 | 0.999 | 1 | 1 | 0.998 | 0.145 | 0.387 |
| | OpenParse-0.7.0 | 0.646 | 0.814 | 0.681 | 0.974 | 0.996 | 1 | 0.284 | 0.639 | 0.595 | 0.641 |
| Expert VLMs | GOT-OCR | 0.287 | 0.411 | 0.189 | 0.315 | 0.36 | 0.528 | 0.459 | 0.52 | 0.141 | 0.28 |
| | Nougat | 0.452 | 0.973 | 0.365 | 0.998 | 0.488 | 0.941 | 0.572 | 1 | 0.382 | 0.954 |
| | Mistral OCR | 0.268 | 0.439 | 0.072 | 0.325 | 0.318 | 0.495 | 0.6 | 0.65 | 0.083 | 0.284 |
| | OLMOCR-sglang | 0.326 | 0.469 | 0.097 | 0.293 | 0.455 | 0.655 | 0.608 | 0.652 | 0.145 | 0.277 |
| | SmolDocling-256M_transformer | 0.493 | 0.816 | 0.262 | 0.838 | 0.753 | 0.997 | 0.729 | 0.907 | 0.227 | 0.522 |
| General VLMs | Gemini2.0-flash | 0.191 | 0.264 | 0.091 | 0.139 | 0.389 | 0.584 | 0.193 | 0.206 | 0.092 | 0.128 |
| | Gemini2.5-Pro | 0.148 | 0.212 | 0.055 | 0.168 | 0.356 | 0.439 | 0.13 | 0.119 | 0.049 | 0.121 |
| | GPT4o | 0.233 | 0.399 | 0.144 | 0.409 | 0.425 | 0.606 | 0.234 | 0.329 | 0.128 | 0.251 |
| | Qwen2-VL-72B | 0.252 | 0.327 | 0.096 | 0.218 | 0.404 | 0.487 | 0.387 | 0.408 | 0.119 | 0.193 |
| | Qwen2.5-VL-72B | 0.214 | 0.261 | 0.092 | 0.18 | 0.315 | 0.434 | 0.341 | 0.262 | 0.106 | 0.168 |
| | InternVL2-76B | 0.44 | 0.443 | 0.353 | 0.29 | 0.543 | 0.701 | 0.547 | 0.555 | 0.317 | 0.228 |
The above data is from:

- OmniDocBench
- OmniDocBench: Benchmarking Diverse PDF Document Parsing with Comprehensive Annotations
## End-to-End Benchmark

### Requirements
- Paddle 3.0
- PaddleOCR 3.0.0
- MinerU 1.3.10
- CUDA 11.8
- cuDNN 8.9
### Data

- Local inference

Data: 15 PDF files, totaling 925 pages, containing elements such as tables, formulas, seals, and charts.

Env: NVIDIA Tesla V100 + Intel Xeon Gold 6271C
| Pipeline | Configuration | Average time per page (s) | Average CPU (%) | Peak RAM Usage (MB) | Average RAM Usage (MB) | Average GPU (%) | Peak VRAM Usage (MB) | Average VRAM Usage (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PP-StructureV3 | Basic | 1.77 | 111.4 | 6822.4 | 5278.2 | 38.9 | 17403 | 16909.3 |
| | Use chart recognition pipeline | 4.09 | 105.3 | 5628 | 4085.1 | 24.7 | 17403 | 17030.9 |
| | Use PP-OCRv5_mobile_det + PP-OCRv5_mobile_rec | 1.56 | 113.7 | 6712.9 | 5052 | 29.1 | 10929 | 10840.7 |
| | Use PP-FormulaNet_plus-M | 1.42 | 112.9 | 6944.1 | 5193.6 | 38 | 16390 | 15840 |
| | Use PP-OCRv5_mobile_det + PP-OCRv5_mobile_rec + PP-FormulaNet_plus-M | 1.15 | 114.8 | 6666.5 | 5105.4 | 26.1 | 8606 | 8517.2 |
| | Use PP-OCRv5_mobile_det + PP-OCRv5_mobile_rec + PP-FormulaNet_plus-M, and max input length of text detection set to 1200 | 0.99 | 113 | 7172.9 | 5686.4 | 29.2 | 8776 | 8680.8 |
| MinerU | - | 1.57 | 142.9 | 13655.8 | 12083 | 43.3 | 32406 | 9915.4 |
Env: NVIDIA A100 + Intel Xeon Platinum 8350C

| Pipeline | Configuration | Average time per page (s) | Average CPU (%) | Peak RAM Usage (MB) | Average RAM Usage (MB) | Average GPU (%) | Peak VRAM Usage (MB) | Average VRAM Usage (MB) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PP-StructureV3 | Basic | 1.12 | 109.8 | 9418.3 | 7977.9 | 29.8 | 22294 | 21638.4 |
| | Use chart recognition pipeline | 2.76 | 103.7 | 9253.6 | 7840.6 | 24 | 22298 | 21555.3 |
| | Use PP-OCRv5_mobile_det + PP-OCRv5_mobile_rec | 1.04 | 110.7 | 9520.8 | 8034.3 | 22 | 12490 | 12383.1 |
| | Use PP-FormulaNet_plus-M | 0.95 | 111.4 | 9272.9 | 7939.9 | 28.1 | 22350 | 21498.4 |
| | Use PP-OCRv5_mobile_det + PP-OCRv5_mobile_rec + PP-FormulaNet_plus-M | 0.89 | 112.1 | 9457.2 | 8031.5 | 18.5 | 11642 | 11433.6 |
| | Use PP-OCRv5_mobile_det + PP-OCRv5_mobile_rec + PP-FormulaNet_plus-M, and max input length of text detection set to 1200 | 0.64 | 113.5 | 10401.1 | 8688.8 | 23.7 | 11716 | 11453.9 |
| MinerU | - | 1.06 | 168.3 | 18690.4 | 17213.8 | 27.5 | 78760 | 15119 |
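For reference, the non-default rows above correspond to overriding model choices when constructing the pipeline. Below is a minimal sketch of the fastest configuration; the constructor argument names are assumptions, so verify them against the pipeline documentation for your release.

```python
from paddleocr import PPStructureV3

# Lighter OCR and formula models, with the text detection input capped
# at 1200 px on the longer side, matching the fastest row in the tables
# above. Argument names are assumptions; check the pipeline docs.
pipeline = PPStructureV3(
    text_detection_model_name="PP-OCRv5_mobile_det",
    text_recognition_model_name="PP-OCRv5_mobile_rec",
    formula_recognition_model_name="PP-FormulaNet_plus-M",
    text_det_limit_side_len=1200,
    text_det_limit_type="max",
)
```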
- Serving

Data: 1500 images containing tables, formulas, seals, charts, and other elements. The default pipeline configuration is used.
| Number of Instances | Number of Concurrent Requests | Throughput | Average Latency (s) | Success Number / Total Number |
| --- | --- | --- | --- | --- |
| 4 GPUs * 1 | 4 | 1.69 | 2.36 | 1 |
| 4 GPUs * 4 | 16 | 4.05 | 3.87 | 1 |
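The throughput figures above come from sending requests concurrently. Below is a minimal client sketch using `requests` and a thread pool; the service address, the `/layout-parsing` endpoint, and the payload fields (`file`, `fileType`) are assumptions based on the basic serving protocol, so check the serving documentation for your deployment.

```python
import base64
import concurrent.futures

import requests

API_URL = "http://localhost:8080/layout-parsing"  # placeholder service address

def parse_one(path: str) -> dict:
    """Send a single document to the service and return the parsed payload."""
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    # "fileType": 1 denotes an image input here (assumed; verify against
    # the serving docs for your release).
    resp = requests.post(API_URL, json={"file": data, "fileType": 1}, timeout=300)
    resp.raise_for_status()
    return resp.json()

files = ["img_0001.png", "img_0002.png", "img_0003.png", "img_0004.png"]

# Four concurrent requests, matching the "4 GPUs * 1" row above.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(parse_one, files))
```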
## PP-StructureV3 Demo

## FAQ
- What is the default configuration? How can I get higher accuracy, faster speed, or lower GPU memory usage?

When using the mobile OCR models + PP-FormulaNet_plus-M with the max input length of text detection set to 1200, setting `use_chart_recognition` to `False` so that the chart recognition model is not loaded reduces GPU memory usage further. On the V100, peak and average GPU memory drop from 8776.0 MB and 8680.8 MB to 6118.0 MB and 6016.7 MB, respectively; on the A100, they drop from 11716.0 MB and 11453.9 MB to 9850.0 MB and 9593.5 MB, respectively.

You can use multiple GPUs by setting `device` to `gpu:<no.>,<no.>`, such as `gpu:0,1,2,3`; see the sketch after this item. For multi-process parallel inference, refer to Multi-Process Parallel Inference.
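A minimal sketch combining the two tips, assuming `use_chart_recognition` and `device` are accepted by the pipeline constructor as described above:

```python
from paddleocr import PPStructureV3

# Skip loading the chart recognition model (saving roughly 1.9-2.7 GB of
# VRAM in the measurements above) and spread inference across four GPUs.
pipeline = PPStructureV3(
    use_chart_recognition=False,
    device="gpu:0,1,2,3",
)
```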
- About serving deployment

(1) Can the service handle requests concurrently?

For the basic serving deployment solution, the service processes only one request at a time; this solution is mainly intended for quick validation, for establishing the development pipeline, or for scenarios where concurrent requests are not required.

For the high-stability serving deployment solution, the service also processes only one request at a time by default, but you can refer to the related documentation to adjust the configuration and achieve scaling.
(2) How can I reduce latency and improve throughput?

Use the high-performance inference plugin and deploy multiple instances. A sketch of enabling the plugin for local inference follows.
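For local (non-serving) use, the high-performance inference plugin is typically enabled at pipeline construction. `enable_hpi` is the flag name used by recent PaddleOCR releases, but treat it as an assumption and verify it against your version:

```python
from paddleocr import PPStructureV3

# Enable the high-performance inference plugin; its extra dependencies
# must be installed beforehand.
pipeline = PPStructureV3(enable_hpi=True)
```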