Text Detection Module Usage Guide¶
1. Overview¶
The text detection module is a critical component of OCR (Optical Character Recognition) systems, responsible for locating and marking text-containing regions in images. The performance of this module directly impacts the accuracy and efficiency of the entire OCR system. The text detection module typically outputs bounding boxes for text regions, which are then passed to the text recognition module for further processing.
2. Supported Models List¶
| Model | Model Download Link | Detection Hmean (%) | GPU Inference Time (ms)<br/>[Standard Mode / High-Performance Mode] | CPU Inference Time (ms)<br/>[Standard Mode / High-Performance Mode] | Model Size (MB) | Description |
|---|---|---|---|---|---|---|
| PP-OCRv5_server_det | Inference Model / Training Model | 83.8 | 89.55 / 70.19 | 371.65 / 371.65 | 84.3 | PP-OCRv5 server-side text detection model with higher accuracy, suitable for deployment on high-performance servers |
| PP-OCRv5_mobile_det | Inference Model / Training Model | 79.0 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 | PP-OCRv5 mobile-side text detection model with higher efficiency, suitable for deployment on edge devices |
| PP-OCRv4_server_det | Inference Model / Training Model | 69.2 | 83.34 / 80.91 | 442.58 / 442.58 | 109 | PP-OCRv4 server-side text detection model with higher accuracy, suitable for deployment on high-performance servers |
| PP-OCRv4_mobile_det | Inference Model / Training Model | 63.8 | 8.79 / 3.13 | 51.00 / 28.58 | 4.7 | PP-OCRv4 mobile-side text detection model with higher efficiency, suitable for deployment on edge devices |
Testing Environment:

- Performance Testing Environment
  - Test Dataset: A multilingual dataset newly constructed for PaddleOCR 3.0 (including Chinese, Traditional Chinese, English, and Japanese), covering street scenes, web images, documents, handwriting, blur, rotation, distortion, etc., totaling 2677 images.
  - Hardware Configuration:
    - GPU: NVIDIA Tesla T4
    - CPU: Intel Xeon Gold 6271C @ 2.60GHz
    - Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
- Inference Mode Description
| Mode | GPU Configuration | CPU Configuration | Acceleration Techniques |
|---|---|---|---|
| Standard Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of precision types and acceleration strategies | FP32 precision / 8 threads | Optimal backend selection (Paddle/OpenVINO/TRT, etc.) |
3. Quick Start¶
❗ Before starting, please install the PaddleOCR wheel package. Refer to the Installation Guide for details.
Use the following command for a quick experience:
paddleocr text_detection -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/general_ocr_001.png
You can also integrate the model inference into your project. Before running the following code, download the example image locally.
from paddleocr import TextDetection
model = TextDetection(model_name="PP-OCRv5_server_det")
output = model.predict("general_ocr_001.png", batch_size=1)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/res.json")
The output will be:
{'res': {'input_path': 'general_ocr_001.png', 'page_index': None, 'dt_polys': array([[[ 75, 549],
...,
[ 77, 586]],
...,
[[ 31, 406],
...,
[ 34, 455]]], dtype=int16), 'dt_scores': [0.873949039891189, 0.8948166013613552, 0.8842595305917041, 0.876953790920377]}}
Output parameter meanings:

- `input_path`: Path of the input image.
- `page_index`: If the input is a PDF, this indicates the current page number; otherwise, it is `None`.
- `dt_polys`: Predicted text detection boxes, where each box contains four vertices (x, y coordinates).
- `dt_scores`: Confidence scores of the predicted text detection boxes.
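If you want to work with the detection boxes directly, they can be read from the result object. The following is a minimal sketch, assuming OpenCV and NumPy are installed and that the `json` attribute mirrors the printed dictionary above; it is not part of the PaddleOCR API itself:

```python
# Minimal sketch: draw the predicted polygons on the input image.
# Assumes `output` is the iterable returned by model.predict() above.
import cv2
import numpy as np

image = cv2.imread("general_ocr_001.png")
for res in output:
    data = res.json["res"]  # assumed to match the printed result structure
    for poly in data["dt_polys"]:
        pts = np.asarray(poly, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(image, [pts], isClosed=True, color=(0, 255, 0), thickness=2)
cv2.imwrite("./output/boxes_preview.png", image)
```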
Visualization example:
Method and parameter descriptions:

- Instantiate the text detection model (e.g., `PP-OCRv5_server_det`):
| Parameter | Description | Type | Default |
|---|---|---|---|
| `model_name` | Model name. All supported text detection model names, such as `PP-OCRv5_mobile_det`. | `str` | `None` |
| `model_dir` | Model storage path. | `str` | `None` |
| `device` | Device(s) to use for inference. Examples: `cpu`, `gpu`, `npu`, `gpu:0`, `gpu:0,1`. If multiple devices are specified, inference will be performed in parallel. Note that parallel inference is not always supported. By default, GPU 0 will be used if available; otherwise, the CPU will be used. | `str` | `None` |
| `enable_hpi` | Whether to use high-performance inference. | `bool` | `False` |
| `use_tensorrt` | Whether to use the Paddle Inference TensorRT subgraph engine. For Paddle with CUDA version 11.8, the compatible TensorRT version is 8.x (x>=6), and it is recommended to install TensorRT 8.6.1.6. For Paddle with CUDA version 12.6, the compatible TensorRT version is 10.x (x>=5), and it is recommended to install TensorRT 10.5.0.18. | `bool` | `False` |
| `min_subgraph_size` | Minimum subgraph size for TensorRT when using the Paddle Inference TensorRT subgraph engine. | `int` | `3` |
| `precision` | Precision for TensorRT when using the Paddle Inference TensorRT subgraph engine. Options: `fp32`, `fp16`, etc. | `str` | `fp32` |
| `enable_mkldnn` | Whether to enable MKL-DNN acceleration for inference. If MKL-DNN is unavailable or the model does not support it, acceleration will not be used even if this flag is set. | `bool` | `True` |
| `cpu_threads` | Number of threads to use for inference on CPUs. | `int` | `10` |
| `limit_side_len` | Limit on the side length of the input image for detection. `int` specifies the value. If set to `None`, the default value from the official PaddleOCR model configuration will be used. | `int` / `None` | `None` |
| `limit_type` | Type of image side length limitation. `"min"` ensures the shortest side of the image is no less than `limit_side_len`; `"max"` ensures the longest side is no greater than `limit_side_len`. If set to `None`, the default value from the official PaddleOCR model configuration will be used. | `str` / `None` | `None` |
| `thresh` | Pixel score threshold. Pixels in the output probability map with scores greater than this threshold are considered text pixels. Accepts any float value greater than 0. If set to `None`, the default value from the official PaddleOCR model configuration will be used. | `float` / `None` | `None` |
| `box_thresh` | If the average score of all pixels inside the bounding box is greater than this threshold, the result is considered a text region. Accepts any float value greater than 0. If set to `None`, the default value from the official PaddleOCR model configuration will be used. | `float` / `None` | `None` |
| `unclip_ratio` | Expansion ratio for the Vatti clipping algorithm, used to expand the text region. Accepts any float value greater than 0. If set to `None`, the default value from the official PaddleOCR model configuration will be used. | `float` / `None` | `None` |
| `input_shape` | Input image size for the model in the format `(C, H, W)`. If set to `None`, the model's default size will be used. | `tuple` / `None` | `None` |
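For example, the constructor parameters above can be combined as follows. This is a minimal sketch; the specific values are illustrative rather than recommended defaults:

```python
from paddleocr import TextDetection

# Minimal sketch: explicit device and post-processing settings (values are illustrative).
model = TextDetection(
    model_name="PP-OCRv5_mobile_det",  # lighter model, suited to CPU/edge inference
    device="cpu",                      # or "gpu:0" if a GPU is available
    box_thresh=0.6,                    # filter out low-confidence boxes more aggressively
    unclip_ratio=2.0,                  # expand detected regions slightly more than usual
)
```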
- The `predict()` method parameters:
| Parameter | Description | Type | Default |
|---|---|---|---|
| `input` | Input data to be predicted. Required. Supports multiple input types, such as `numpy.ndarray` image data, a local file path, a URL, a directory, or a list of the above. | `Python Var` / `str` / `dict` / `list` | |
| `batch_size` | Batch size, a positive integer. | `int` | `1` |
| `limit_side_len` | Limit on the side length of the input image for detection. `int` specifies the value. If set to `None`, the parameter value initialized by the model will be used by default. | `int` / `None` | `None` |
| `limit_type` | Type of image side length limitation. `"min"` ensures the shortest side of the image is no less than `limit_side_len`; `"max"` ensures the longest side is no greater than `limit_side_len`. If set to `None`, the parameter value initialized by the model will be used by default. | `str` / `None` | `None` |
| `thresh` | Pixel score threshold. Pixels in the output probability map with scores greater than this threshold are considered text pixels. Accepts any float value greater than 0. If set to `None`, the parameter value initialized by the model will be used by default. | `float` / `None` | `None` |
| `box_thresh` | If the average score of all pixels inside the bounding box is greater than this threshold, the result is considered a text region. Accepts any float value greater than 0. If set to `None`, the parameter value initialized by the model will be used by default. | `float` / `None` | `None` |
| `unclip_ratio` | Expansion ratio for the Vatti clipping algorithm, used to expand the text region. Accepts any float value greater than 0. If set to `None`, the parameter value initialized by the model will be used by default. | `float` / `None` | `None` |
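These parameters can also be overridden for a single call; a minimal sketch with illustrative values:

```python
# Minimal sketch: overriding detection parameters at prediction time.
# Assumes `model` was created as in the examples above; values are illustrative.
output = model.predict(
    "general_ocr_001.png",
    batch_size=1,
    limit_type="min",
    limit_side_len=960,
    thresh=0.3,
    box_thresh=0.6,
)
```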
- Result processing methods:

| Method | Method Description | Parameter | Parameter Type | Parameter Description | Default |
|---|---|---|---|---|---|
| `print()` | Print results to the terminal | `format_json` | `bool` | Format output as JSON | `True` |
| | | `indent` | `int` | JSON indentation level | `4` |
| | | `ensure_ascii` | `bool` | Escape non-ASCII characters | `False` |
| `save_to_json()` | Save results as a JSON file | `save_path` | `str` | Output file path | Required |
| | | `indent` | `int` | JSON indentation level | `4` |
| | | `ensure_ascii` | `bool` | Escape non-ASCII characters | `False` |
| `save_to_img()` | Save results as an image | `save_path` | `str` | Output file path | Required |
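For instance, the save and print methods accept the formatting parameters listed above; a minimal sketch:

```python
# Minimal sketch: result-processing methods with explicit formatting parameters.
# Assumes `output` is the iterable returned by predict() above.
for res in output:
    res.print(format_json=True, indent=2, ensure_ascii=False)   # compact, UTF-8 terminal output
    res.save_to_json(save_path="./output/res.json", indent=2)   # persist the raw results
    res.save_to_img(save_path="./output/")                      # save the visualization image
```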
- Additional attributes:

| Attribute | Description |
|---|---|
| `json` | Get prediction results in JSON format |
| `img` | Get visualization image as a dictionary |
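These attributes let you consume results in memory instead of writing files; a minimal sketch, assuming `output` comes from `predict()` as above and that `json` mirrors the printed dictionary:

```python
# Minimal sketch: reading results through the `json` and `img` attributes.
for res in output:
    data = res.json                   # JSON-serializable dict of the prediction
    print(data["res"]["dt_scores"])   # e.g., inspect the confidence scores
    vis = res.img                     # dict of visualization images keyed by name
    print(list(vis.keys()))
```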
4. Custom Development¶
If the above models do not meet your requirements, follow these steps for custom development (using `PP-OCRv5_server_det` as an example). First, prepare a text detection dataset (refer to the Demo Dataset format). After preparation, proceed with model training and export. The exported model can then be used through the Python API described above. Ensure PaddleOCR dependencies are installed as per the Installation Guide.
4.1 Dataset and Pretrained Model Preparation¶
4.1.1 Prepare Dataset¶
# Download example dataset
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/data/ocr_det_dataset_examples.tar
tar -xf ocr_det_dataset_examples.tar
4.1.2 Download Pretrained Model¶
# Download PP-OCRv5_server_det pretrained model
wget https://paddle-model-ecology.bj.bcebos.com/paddlex/official_pretrained_model/PP-OCRv5_server_det_pretrained.pdparams
4.2 Model Training¶
PaddleOCR modularizes the code. To train the `PP-OCRv5_server_det` model, use its configuration file.
Training command:
# Single-GPU training (default)
python3 tools/train.py -c configs/det/PP-OCRv5/PP-OCRv5_server_det.yml \
-o Global.pretrained_model=./PP-OCRv5_server_det_pretrained.pdparams \
Train.dataset.data_dir=./ocr_det_dataset_examples \
Train.dataset.label_file_list='[./ocr_det_dataset_examples/train.txt]' \
Eval.dataset.data_dir=./ocr_det_dataset_examples \
Eval.dataset.label_file_list='[./ocr_det_dataset_examples/val.txt]'
# Multi-GPU training (specify GPUs with --gpus)
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py \
-c configs/det/PP-OCRv5/PP-OCRv5_server_det.yml \
-o Global.pretrained_model=./PP-OCRv5_server_det_pretrained.pdparams \
Train.dataset.data_dir=./ocr_det_dataset_examples \
Train.dataset.label_file_list='[./ocr_det_dataset_examples/train.txt]' \
Eval.dataset.data_dir=./ocr_det_dataset_examples \
Eval.dataset.label_file_list='[./ocr_det_dataset_examples/val.txt]'
4.3 Model Evaluation¶
You can evaluate trained weights (e.g., output/PP-OCRv5_server_det/best_accuracy.pdparams
) using the following command:
# Note: Set pretrained_model to local path. For custom-trained models, modify the path and filename as {path/to/weights}/{model_name}.
# Demo dataset evaluation
python3 tools/eval.py -c configs/det/PP-OCRv5/PP-OCRv5_server_det.yml \
-o Global.pretrained_model=output/PP-OCRv5_server_det/best_accuracy.pdparams \
Eval.dataset.data_dir=./ocr_det_dataset_examples \
Eval.dataset.label_file_list='[./ocr_det_dataset_examples/val.txt]'
4.4 Model Export¶
python3 tools/export_model.py -c configs/det/PP-OCRv5/PP-OCRv5_server_det.yml -o \
Global.pretrained_model=output/PP-OCRv5_server_det/best_accuracy.pdparams \
Global.save_inference_dir="./PP-OCRv5_server_det_infer/"
After export, the static graph model (the model structure and parameter files used for inference) will be saved in `./PP-OCRv5_server_det_infer/`.
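The exported directory can then be loaded through the Python API by pointing `model_dir` at it; a minimal sketch reusing the parameters documented above:

```python
# Minimal sketch: running inference with the locally exported model
# instead of the automatically downloaded one.
from paddleocr import TextDetection

model = TextDetection(
    model_name="PP-OCRv5_server_det",
    model_dir="./PP-OCRv5_server_det_infer/",  # directory produced by tools/export_model.py
)
output = model.predict("general_ocr_001.png")
for res in output:
    res.save_to_img(save_path="./output/")
```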
5. FAQ¶
- Use the parameters `limit_type` and `limit_side_len` to constrain image dimensions.
  - `limit_type` options: [`max`, `min`]
  - `limit_side_len`: a positive integer (typically a multiple of 32, e.g., 960).
- For lower-resolution images, use `limit_type="min"` and `limit_side_len=960` to balance computational efficiency and detection quality.
- For higher-resolution images requiring larger detection scales, set `limit_side_len` to larger values (e.g., 1216). See the sketch after this list.
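A minimal sketch of how these FAQ settings map onto the Python API; the file name and values are illustrative:

```python
# Minimal sketch: applying the side-length limits discussed in the FAQ.
from paddleocr import TextDetection

model = TextDetection(model_name="PP-OCRv5_server_det")

# Lower-resolution input: keep the shortest side at no less than 960 px.
low_res_out = model.predict("general_ocr_001.png", limit_type="min", limit_side_len=960)

# Higher-resolution input that needs a larger detection scale.
high_res_out = model.predict("general_ocr_001.png", limit_type="min", limit_side_len=1216)
```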