PaddleX Model List (CPU/GPU)
PaddleX ships multiple pipelines; each pipeline contains several modules, and each module offers a set of models. Use the benchmark data below to choose a model: if accuracy matters most, pick a higher-accuracy model; if inference speed matters most, pick a faster model; if storage matters most, pick a model with a smaller size.
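As an illustration of this tradeoff, the hypothetical helper below filters a few rows taken from the image classification benchmarks in this document and returns the most accurate model that fits a latency or size budget. The helper itself is not part of PaddleX; the names and numbers come from the tables below.

```python
# (model name, Top-1 acc %, GPU inference ms, model size MB) rows copied
# from the image classification benchmark table in this document.
BENCHMARKS = [
    ("CLIP_vit_base_patch16_224", 85.36, 13.1957, 306.5),
    ("PP-LCNet_x1_0", 71.32, 3.84845, 10.5),
    ("MobileNetV3_large_x1_0", 75.3, 8.64112, 19.5),
    ("ResNet50", 76.5, 9.62383, 90.8),
]

def pick_model(max_gpu_ms=None, max_size_mb=None):
    """Return the highest-accuracy model satisfying the given budgets."""
    candidates = [
        (name, acc) for name, acc, gpu_ms, size_mb in BENCHMARKS
        if (max_gpu_ms is None or gpu_ms <= max_gpu_ms)
        and (max_size_mb is None or size_mb <= max_size_mb)
    ]
    if not candidates:
        raise ValueError("no model fits the given budgets")
    # Among the models within budget, take the most accurate one.
    return max(candidates, key=lambda c: c[1])[0]
```

For example, with no constraints the helper returns the most accurate model (CLIP_vit_base_patch16_224), while a 10 ms GPU budget steers it to ResNet50.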
Image Classification Module
Model Name | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
CLIP_vit_base_patch16_224 | 85.36 | 13.1957 | 285.493 | 306.5 M | CLIP_vit_base_patch16_224.yaml | Inference Model/Trained Model |
CLIP_vit_large_patch14_224 | 88.1 | 51.1284 | 1131.28 | 1.04 G | CLIP_vit_large_patch14_224.yaml | Inference Model/Trained Model |
ConvNeXt_base_224 | 83.84 | 12.8473 | 1513.87 | 313.9 M | ConvNeXt_base_224.yaml | Inference Model/Trained Model |
ConvNeXt_base_384 | 84.90 | 31.7607 | 3967.05 | 313.9 M | ConvNeXt_base_384.yaml | Inference Model/Trained Model |
ConvNeXt_large_224 | 84.26 | 26.8103 | 2463.56 | 700.7 M | ConvNeXt_large_224.yaml | Inference Model/Trained Model |
ConvNeXt_large_384 | 85.27 | 66.4058 | 6598.92 | 700.7 M | ConvNeXt_large_384.yaml | Inference Model/Trained Model |
ConvNeXt_small | 83.13 | 9.74075 | 1127.6 | 178.0 M | ConvNeXt_small.yaml | Inference Model/Trained Model |
ConvNeXt_tiny | 82.03 | 5.48923 | 672.559 | 101.4 M | ConvNeXt_tiny.yaml | Inference Model/Trained Model |
FasterNet-L | 83.5 | 23.4415 | - | 357.1 M | FasterNet-L.yaml | Inference Model/Trained Model |
FasterNet-M | 83.0 | 21.8936 | - | 204.6 M | FasterNet-M.yaml | Inference Model/Trained Model |
FasterNet-S | 81.3 | 13.0409 | - | 119.3 M | FasterNet-S.yaml | Inference Model/Trained Model |
FasterNet-T0 | 71.9 | 12.2432 | - | 15.1 M | FasterNet-T0.yaml | Inference Model/Trained Model |
FasterNet-T1 | 75.9 | 11.3562 | - | 29.2 M | FasterNet-T1.yaml | Inference Model/Trained Model |
FasterNet-T2 | 79.1 | 10.703 | - | 57.4 M | FasterNet-T2.yaml | Inference Model/Trained Model |
MobileNetV1_x0_5 | 63.5 | 1.86754 | 7.48297 | 4.8 M | MobileNetV1_x0_5.yaml | Inference Model/Trained Model |
MobileNetV1_x0_25 | 51.4 | 1.83478 | 4.83674 | 1.8 M | MobileNetV1_x0_25.yaml | Inference Model/Trained Model |
MobileNetV1_x0_75 | 68.8 | 2.57903 | 10.6343 | 9.3 M | MobileNetV1_x0_75.yaml | Inference Model/Trained Model |
MobileNetV1_x1_0 | 71.0 | 2.78781 | 13.98 | 15.2 M | MobileNetV1_x1_0.yaml | Inference Model/Trained Model |
MobileNetV2_x0_5 | 65.0 | 4.94234 | 11.1629 | 7.1 M | MobileNetV2_x0_5.yaml | Inference Model/Trained Model |
MobileNetV2_x0_25 | 53.2 | 4.50856 | 9.40991 | 5.5 M | MobileNetV2_x0_25.yaml | Inference Model/Trained Model |
MobileNetV2_x1_0 | 72.2 | 6.12159 | 16.0442 | 12.6 M | MobileNetV2_x1_0.yaml | Inference Model/Trained Model |
MobileNetV2_x1_5 | 74.1 | 6.28385 | 22.5129 | 25.0 M | MobileNetV2_x1_5.yaml | Inference Model/Trained Model |
MobileNetV2_x2_0 | 75.2 | 6.12888 | 30.8612 | 41.2 M | MobileNetV2_x2_0.yaml | Inference Model/Trained Model |
MobileNetV3_large_x0_5 | 69.2 | 6.31302 | 14.5588 | 9.6 M | MobileNetV3_large_x0_5.yaml | Inference Model/Trained Model |
MobileNetV3_large_x0_35 | 64.3 | 5.76207 | 13.9041 | 7.5 M | MobileNetV3_large_x0_35.yaml | Inference Model/Trained Model |
MobileNetV3_large_x0_75 | 73.1 | 8.41737 | 16.9506 | 14.0 M | MobileNetV3_large_x0_75.yaml | Inference Model/Trained Model |
MobileNetV3_large_x1_0 | 75.3 | 8.64112 | 19.1614 | 19.5 M | MobileNetV3_large_x1_0.yaml | Inference Model/Trained Model |
MobileNetV3_large_x1_25 | 76.4 | 8.73358 | 22.1296 | 26.5 M | MobileNetV3_large_x1_25.yaml | Inference Model/Trained Model |
MobileNetV3_small_x0_5 | 59.2 | 5.16721 | 11.2688 | 6.8 M | MobileNetV3_small_x0_5.yaml | Inference Model/Trained Model |
MobileNetV3_small_x0_35 | 53.0 | 5.22053 | 11.0055 | 6.0 M | MobileNetV3_small_x0_35.yaml | Inference Model/Trained Model |
MobileNetV3_small_x0_75 | 66.0 | 5.39831 | 12.8313 | 8.5 M | MobileNetV3_small_x0_75.yaml | Inference Model/Trained Model |
MobileNetV3_small_x1_0 | 68.2 | 6.00993 | 12.9598 | 10.5 M | MobileNetV3_small_x1_0.yaml | Inference Model/Trained Model |
MobileNetV3_small_x1_25 | 70.7 | 6.9589 | 14.3995 | 13.0 M | MobileNetV3_small_x1_25.yaml | Inference Model/Trained Model |
MobileNetV4_conv_large | 83.4 | 12.5485 | 51.6453 | 125.2 M | MobileNetV4_conv_large.yaml | Inference Model/Trained Model |
MobileNetV4_conv_medium | 79.9 | 9.65509 | 26.6157 | 37.6 M | MobileNetV4_conv_medium.yaml | Inference Model/Trained Model |
MobileNetV4_conv_small | 74.6 | 5.24172 | 11.0893 | 14.7 M | MobileNetV4_conv_small.yaml | Inference Model/Trained Model |
MobileNetV4_hybrid_large | 83.8 | 20.0726 | 213.769 | 145.1 M | MobileNetV4_hybrid_large.yaml | Inference Model/Trained Model |
MobileNetV4_hybrid_medium | 80.5 | 19.7543 | 62.2624 | 42.9 M | MobileNetV4_hybrid_medium.yaml | Inference Model/Trained Model |
PP-HGNet_base | 85.0 | 14.2969 | 327.114 | 249.4 M | PP-HGNet_base.yaml | Inference Model/Trained Model |
PP-HGNet_small | 81.51 | 5.50661 | 119.041 | 86.5 M | PP-HGNet_small.yaml | Inference Model/Trained Model |
PP-HGNet_tiny | 79.83 | 5.22006 | 69.396 | 52.4 M | PP-HGNet_tiny.yaml | Inference Model/Trained Model |
PP-HGNetV2-B0 | 77.77 | 6.53694 | 23.352 | 21.4 M | PP-HGNetV2-B0.yaml | Inference Model/Trained Model |
PP-HGNetV2-B1 | 79.18 | 6.56034 | 27.3099 | 22.6 M | PP-HGNetV2-B1.yaml | Inference Model/Trained Model |
PP-HGNetV2-B2 | 81.74 | 9.60494 | 43.1219 | 39.9 M | PP-HGNetV2-B2.yaml | Inference Model/Trained Model |
PP-HGNetV2-B3 | 82.98 | 11.0042 | 55.1367 | 57.9 M | PP-HGNetV2-B3.yaml | Inference Model/Trained Model |
PP-HGNetV2-B4 | 83.57 | 9.66407 | 54.2462 | 70.4 M | PP-HGNetV2-B4.yaml | Inference Model/Trained Model |
PP-HGNetV2-B5 | 84.75 | 15.7091 | 115.926 | 140.8 M | PP-HGNetV2-B5.yaml | Inference Model/Trained Model |
PP-HGNetV2-B6 | 86.30 | 21.226 | 255.279 | 268.4 M | PP-HGNetV2-B6.yaml | Inference Model/Trained Model |
PP-LCNet_x0_5 | 63.14 | 3.67722 | 6.66857 | 6.7 M | PP-LCNet_x0_5.yaml | Inference Model/Trained Model |
PP-LCNet_x0_25 | 51.86 | 2.65341 | 5.81357 | 5.5 M | PP-LCNet_x0_25.yaml | Inference Model/Trained Model |
PP-LCNet_x0_35 | 58.09 | 2.7212 | 6.28944 | 5.9 M | PP-LCNet_x0_35.yaml | Inference Model/Trained Model |
PP-LCNet_x0_75 | 68.18 | 3.91032 | 8.06953 | 8.4 M | PP-LCNet_x0_75.yaml | Inference Model/Trained Model |
PP-LCNet_x1_0 | 71.32 | 3.84845 | 9.23735 | 10.5 M | PP-LCNet_x1_0.yaml | Inference Model/Trained Model |
PP-LCNet_x1_5 | 73.71 | 3.97666 | 12.3457 | 16.0 M | PP-LCNet_x1_5.yaml | Inference Model/Trained Model |
PP-LCNet_x2_0 | 75.18 | 4.07556 | 16.2752 | 23.2 M | PP-LCNet_x2_0.yaml | Inference Model/Trained Model |
PP-LCNet_x2_5 | 76.60 | 4.06028 | 21.5063 | 32.1 M | PP-LCNet_x2_5.yaml | Inference Model/Trained Model |
PP-LCNetV2_base | 77.05 | 5.23428 | 19.6005 | 23.7 M | PP-LCNetV2_base.yaml | Inference Model/Trained Model |
PP-LCNetV2_large | 78.51 | 6.78335 | 30.4378 | 37.3 M | PP-LCNetV2_large.yaml | Inference Model/Trained Model |
PP-LCNetV2_small | 73.97 | 3.89762 | 13.0273 | 14.6 M | PP-LCNetV2_small.yaml | Inference Model/Trained Model |
ResNet18_vd | 72.3 | 3.53048 | 31.3014 | 41.5 M | ResNet18_vd.yaml | Inference Model/Trained Model |
ResNet18 | 71.0 | 2.4868 | 27.4601 | 41.5 M | ResNet18.yaml | Inference Model/Trained Model |
ResNet34_vd | 76.0 | 5.60675 | 56.0653 | 77.3 M | ResNet34_vd.yaml | Inference Model/Trained Model |
ResNet34 | 74.6 | 4.16902 | 51.925 | 77.3 M | ResNet34.yaml | Inference Model/Trained Model |
ResNet50_vd | 79.1 | 10.1885 | 68.446 | 90.8 M | ResNet50_vd.yaml | Inference Model/Trained Model |
ResNet50 | 76.5 | 9.62383 | 64.8135 | 90.8 M | ResNet50.yaml | Inference Model/Trained Model |
ResNet101_vd | 80.2 | 20.0563 | 124.85 | 158.4 M | ResNet101_vd.yaml | Inference Model/Trained Model |
ResNet101 | 77.6 | 19.2297 | 121.006 | 158.7 M | ResNet101.yaml | Inference Model/Trained Model |
ResNet152_vd | 80.6 | 29.6439 | 181.678 | 214.3 M | ResNet152_vd.yaml | Inference Model/Trained Model |
ResNet152 | 78.3 | 30.0461 | 177.707 | 214.2 M | ResNet152.yaml | Inference Model/Trained Model |
ResNet200_vd | 80.9 | 39.1628 | 235.185 | 266.0 M | ResNet200_vd.yaml | Inference Model/Trained Model |
StarNet-S1 | 73.6 | 9.895 | 23.0465 | 11.2 M | StarNet-S1.yaml | Inference Model/Trained Model |
StarNet-S2 | 74.8 | 7.91279 | 21.9571 | 14.3 M | StarNet-S2.yaml | Inference Model/Trained Model |
StarNet-S3 | 77.0 | 10.7531 | 30.7656 | 22.2 M | StarNet-S3.yaml | Inference Model/Trained Model |
StarNet-S4 | 79.0 | 15.2868 | 43.2497 | 28.9 M | StarNet-S4.yaml | Inference Model/Trained Model |
SwinTransformer_base_patch4_window7_224 | 83.37 | 16.9848 | 383.83 | 310.5 M | SwinTransformer_base_patch4_window7_224.yaml | Inference Model/Trained Model |
SwinTransformer_base_patch4_window12_384 | 84.17 | 37.2855 | 1178.63 | 311.4 M | SwinTransformer_base_patch4_window12_384.yaml | Inference Model/Trained Model |
SwinTransformer_large_patch4_window7_224 | 86.19 | 27.5498 | 689.729 | 694.8 M | SwinTransformer_large_patch4_window7_224.yaml | Inference Model/Trained Model |
SwinTransformer_large_patch4_window12_384 | 87.06 | 74.1768 | 2105.22 | 696.1 M | SwinTransformer_large_patch4_window12_384.yaml | Inference Model/Trained Model |
SwinTransformer_small_patch4_window7_224 | 83.21 | 16.3982 | 285.56 | 175.6 M | SwinTransformer_small_patch4_window7_224.yaml | Inference Model/Trained Model |
SwinTransformer_tiny_patch4_window7_224 | 81.10 | 8.54846 | 156.306 | 100.1 M | SwinTransformer_tiny_patch4_window7_224.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are Top-1 Acc on the ImageNet-1k validation set.
Image Multi-Label Classification Module
Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
CLIP_vit_base_patch16_448_ML | 89.15 | - | - | 325.6 M | CLIP_vit_base_patch16_448_ML.yaml | Inference Model/Trained Model |
PP-HGNetV2-B0_ML | 80.98 | - | - | 39.6 M | PP-HGNetV2-B0_ML.yaml | Inference Model/Trained Model |
PP-HGNetV2-B4_ML | 87.96 | - | - | 88.5 M | PP-HGNetV2-B4_ML.yaml | Inference Model/Trained Model |
PP-HGNetV2-B6_ML | 91.25 | - | - | 286.5 M | PP-HGNetV2-B6_ML.yaml | Inference Model/Trained Model |
PP-LCNet_x1_0_ML | 77.96 | - | - | 29.4 M | PP-LCNet_x1_0_ML.yaml | Inference Model/Trained Model |
ResNet50_ML | 83.50 | - | - | 108.9 M | ResNet50_ML.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP for the multi-label classification task on COCO2017.
Pedestrian Attribute Module
Model Name | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-LCNet_x1_0_pedestrian_attribute | 92.2 | 3.84845 | 9.23735 | 6.7 M | PP-LCNet_x1_0_pedestrian_attribute.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mA on an internal dataset built by PaddleX.
Vehicle Attribute Module
Model Name | mA (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-LCNet_x1_0_vehicle_attribute | 91.7 | 3.84845 | 9.23735 | 6.7 M | PP-LCNet_x1_0_vehicle_attribute.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mA on the VeRi dataset.
Image Feature Module
Model Name | recall@1 (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-ShiTuV2_rec | 84.2 | 5.23428 | 19.6005 | 16.3 M | PP-ShiTuV2_rec.yaml | Inference Model/Trained Model |
PP-ShiTuV2_rec_CLIP_vit_base | 88.69 | 13.1957 | 285.493 | 306.6 M | PP-ShiTuV2_rec_CLIP_vit_base.yaml | Inference Model/Trained Model |
PP-ShiTuV2_rec_CLIP_vit_large | 91.03 | 51.1284 | 1131.28 | 1.05 G | PP-ShiTuV2_rec_CLIP_vit_large.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are recall@1 on AliProducts.
Document Orientation Classification Module
Model Name | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-LCNet_x1_0_doc_ori | 99.26 | 3.84845 | 9.23735 | 7.1 M | PP-LCNet_x1_0_doc_ori.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are Top-1 Acc on an internal dataset built by PaddleX.
Face Feature Module
Model Name | Output Feature Dimension | Acc (%) AgeDB-30/CFP-FP/LFW | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | YAML File | Model Download Link |
---|---|---|---|---|---|---|---|
MobileFaceNet | 128 | 96.28/96.71/99.58 | - | - | 4.1 | MobileFaceNet.yaml | Inference Model/Trained Model |
ResNet50_face | 512 | 98.12/98.56/99.77 | - | - | 87.2 | ResNet50_face.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are Accuracy scores measured on the AgeDB-30, CFP-FP, and LFW datasets, respectively.
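The face feature models output fixed-length embeddings (128-dimensional for MobileFaceNet, 512-dimensional for ResNet50_face); whether two face crops match is typically decided by thresholding the cosine similarity of their embeddings. A minimal sketch of that comparison step (the threshold of 0.4 is illustrative, not a PaddleX default):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def is_same_face(emb1, emb2, threshold=0.4):
    """Decide a match by thresholding similarity of two embeddings.

    The threshold is a placeholder; tune it on a validation set such as
    the ones named in the note above.
    """
    return cosine_similarity(emb1, emb2) >= threshold
```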
Main Body Detection Module
Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-ShiTuV2_det | 41.5 | 33.7426 | 537.003 | 27.6 M | PP-ShiTuV2_det.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the PaddleClas main body detection dataset.
Object Detection Module
Note: Accuracy metrics for the models in this module are mAP(0.5:0.95) on the COCO2017 validation set.
Small Object Detection Module
Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-YOLOE_plus_SOD-S | 25.1 | 65.4608 | 324.37 | 77.3 M | PP-YOLOE_plus_SOD-S.yaml | Inference Model/Trained Model |
PP-YOLOE_plus_SOD-L | 31.9 | 57.1448 | 1006.98 | 325.0 M | PP-YOLOE_plus_SOD-L.yaml | Inference Model/Trained Model |
PP-YOLOE_plus_SOD-largesize-L | 42.7 | 458.521 | 11172.7 | 340.5 M | PP-YOLOE_plus_SOD-largesize-L.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the VisDrone-DET validation set.
Pedestrian Detection Module
Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-YOLOE-L_human | 48.0 | 32.7754 | 777.691 | 196.1 M | PP-YOLOE-L_human.yaml | Inference Model/Trained Model |
PP-YOLOE-S_human | 42.5 | 15.0118 | 179.317 | 28.8 M | PP-YOLOE-S_human.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the CrowdHuman validation set.
Vehicle Detection Module
Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-YOLOE-L_vehicle | 63.9 | 32.5619 | 775.633 | 196.1 M | PP-YOLOE-L_vehicle.yaml | Inference Model/Trained Model |
PP-YOLOE-S_vehicle | 61.3 | 15.3787 | 178.441 | 28.8 M | PP-YOLOE-S_vehicle.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the PPVehicle validation set.
Face Detection Module
Model | AP (%) Easy/Medium/Hard | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | YAML File | Model Download Link |
---|---|---|---|---|---|---|
BlazeFace | 77.7/73.4/49.5 | - | - | 0.447 | BlazeFace.yaml | Inference Model/Trained Model |
BlazeFace-FPN-SSH | 83.2/80.5/60.5 | - | - | 0.606 | BlazeFace-FPN-SSH.yaml | Inference Model/Trained Model |
PicoDet_LCNet_x2_5_face | 93.7/90.7/68.1 | - | - | 28.9 | PicoDet_LCNet_x2_5_face.yaml | Inference Model/Trained Model |
PP-YOLOE_plus-S_face | 93.9/91.8/79.8 | - | - | 26.5 | PP-YOLOE_plus-S_face.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the WIDER-FACE validation set with an input size of 640×640.
Abnormality Detection Module
Model Name | Avg (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
STFPM | 96.2 | - | - | 21.5 M | STFPM.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the MVTec AD dataset using the average anomaly score.
Semantic Segmentation Module
Model Name | mIoU (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
Deeplabv3_Plus-R50 | 80.36 | 61.0531 | 1513.58 | 94.9 M | Deeplabv3_Plus-R50.yaml | Inference Model/Trained Model |
Deeplabv3_Plus-R101 | 81.10 | 100.026 | 2460.71 | 162.5 M | Deeplabv3_Plus-R101.yaml | Inference Model/Trained Model |
Deeplabv3-R50 | 79.90 | 82.2631 | 1735.83 | 138.3 M | Deeplabv3-R50.yaml | Inference Model/Trained Model |
Deeplabv3-R101 | 80.85 | 121.492 | 2685.51 | 205.9 M | Deeplabv3-R101.yaml | Inference Model/Trained Model |
OCRNet_HRNet-W18 | 80.67 | 48.2335 | 906.385 | 43.1 M | OCRNet_HRNet-W18.yaml | Inference Model/Trained Model |
OCRNet_HRNet-W48 | 82.15 | 78.9976 | 2226.95 | 249.8 M | OCRNet_HRNet-W48.yaml | Inference Model/Trained Model |
PP-LiteSeg-T | 73.10 | 7.6827 | 138.683 | 28.5 M | PP-LiteSeg-T.yaml | Inference Model/Trained Model |
PP-LiteSeg-B | 75.25 | 10.9935 | 194.727 | 47.0 M | PP-LiteSeg-B.yaml | Inference Model/Trained Model |
SegFormer-B0 (slice) | 76.73 | 11.1946 | 268.929 | 13.2 M | SegFormer-B0.yaml | Inference Model/Trained Model |
SegFormer-B1 (slice) | 78.35 | 17.9998 | 403.393 | 48.5 M | SegFormer-B1.yaml | Inference Model/Trained Model |
SegFormer-B2 (slice) | 81.60 | 48.0371 | 1248.52 | 96.9 M | SegFormer-B2.yaml | Inference Model/Trained Model |
SegFormer-B3 (slice) | 82.47 | 64.341 | 1666.35 | 167.3 M | SegFormer-B3.yaml | Inference Model/Trained Model |
SegFormer-B4 (slice) | 82.38 | 82.4336 | 1995.42 | 226.7 M | SegFormer-B4.yaml | Inference Model/Trained Model |
SegFormer-B5 (slice) | 82.58 | 97.3717 | 2420.19 | 229.7 M | SegFormer-B5.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the Cityscapes dataset using mIoU.
Model Name | mIoU (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
SeaFormer_base (slice) | 40.92 | 24.4073 | 397.574 | 30.8 M | SeaFormer_base.yaml | Inference Model/Trained Model |
SeaFormer_large (slice) | 43.66 | 27.8123 | 550.464 | 49.8 M | SeaFormer_large.yaml | Inference Model/Trained Model |
SeaFormer_small (slice) | 38.73 | 19.2295 | 358.343 | 14.3 M | SeaFormer_small.yaml | Inference Model/Trained Model |
SeaFormer_tiny (slice) | 34.58 | 13.9496 | 330.132 | 6.1 M | SeaFormer_tiny.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the ADE20k dataset. "slice" indicates that the input image has been cropped.
Instance Segmentation Module
Model Name | Mask AP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
SOLOv2 | 35.5 | - | - | 179.1 M | SOLOv2.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the COCO2017 validation set using Mask AP(0.5:0.95).
Text Detection Module
Model Name | Detection Hmean (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-OCRv4_mobile_det | 77.79 | 10.6923 | 120.177 | 4.2 M | PP-OCRv4_mobile_det.yaml | Inference Model/Trained Model |
PP-OCRv4_server_det | 82.69 | 83.3501 | 2434.01 | 100.1 M | PP-OCRv4_server_det.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on a Chinese dataset built by PaddleOCR, covering street scenes, web images, documents, and handwritten text, with 500 images for detection.
Seal Text Detection Module
Model Name | Detection Hmean (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-OCRv4_mobile_seal_det | 96.47 | 10.5878 | 131.813 | 4.7 M | PP-OCRv4_mobile_seal_det.yaml | Inference Model/Trained Model |
PP-OCRv4_server_seal_det | 98.21 | 84.341 | 2425.06 | 108.3 M | PP-OCRv4_server_seal_det.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on a seal dataset built by PaddleX, containing 500 seal images.
Text Recognition Module
Model Name | Recognition Avg Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PP-OCRv4_mobile_rec | 78.20 | 7.95018 | 46.7868 | 10.6 M | PP-OCRv4_mobile_rec.yaml | Inference Model/Trained Model |
PP-OCRv4_server_rec | 79.20 | 7.19439 | 140.179 | 71.2 M | PP-OCRv4_server_rec.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on a Chinese dataset built by PaddleOCR, covering street scenes, web images, documents, and handwritten text, with 11,000 images for text recognition.
Model Name | Recognition Avg Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
ch_SVTRv2_rec | 68.81 | 8.36801 | 165.706 | 73.9 M | ch_SVTRv2_rec.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the A-rank leaderboard of the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition.
Model Name | Recognition Avg Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
ch_RepSVTR_rec | 65.07 | 10.5047 | 51.5647 | 22.1 M | ch_RepSVTR_rec.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the B-rank leaderboard of the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition.
Formula Recognition Module
Model Name | BLEU Score | Normed Edit Distance | ExpRate (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|---|---|
LaTeX_OCR_rec | 0.8821 | 0.0823 | 40.01 | - | - | 89.7 M | LaTeX_OCR_rec.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the LaTeX-OCR formula recognition test set.
Table Structure Recognition Module
Model Name | Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
SLANet | 59.52 | 522.536 | 1845.37 | 6.9 M | SLANet.yaml | Inference Model/Trained Model |
SLANet_plus | 63.69 | 522.536 | 1845.37 | 6.9 M | SLANet_plus.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on an English table recognition dataset built by PaddleX.
Image Rectification Module
Model Name | MS-SSIM (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
UVDoc | 54.40 | - | - | 30.3 M | UVDoc.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on an image rectification dataset built by PaddleX.
Layout Detection Module
Model Name | mAP (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
PicoDet_layout_1x | 86.8 | 13.036 | 91.2634 | 7.4 M | PicoDet_layout_1x.yaml | Inference Model/Trained Model |
PicoDet-S_layout_3cls | 87.1 | 13.521 | 45.7633 | 4.8 M | PicoDet-S_layout_3cls.yaml | Inference Model/Trained Model |
PicoDet-S_layout_17cls | 70.3 | 13.5632 | 46.2059 | 4.8 M | PicoDet-S_layout_17cls.yaml | Inference Model/Trained Model |
PicoDet-L_layout_3cls | 89.3 | 15.7425 | 159.771 | 22.6 M | PicoDet-L_layout_3cls.yaml | Inference Model/Trained Model |
PicoDet-L_layout_17cls | 79.9 | 17.1901 | 160.262 | 22.6 M | PicoDet-L_layout_17cls.yaml | Inference Model/Trained Model |
RT-DETR-H_layout_3cls | 95.9 | 114.644 | 3832.62 | 470.1 M | RT-DETR-H_layout_3cls.yaml | Inference Model/Trained Model |
RT-DETR-H_layout_17cls | 92.6 | 115.126 | 3827.25 | 470.2 M | RT-DETR-H_layout_17cls.yaml | Inference Model/Trained Model |
Note: The evaluation set for the above accuracy metrics is the PaddleX self-built Layout Detection Dataset, containing 10,000 images.
Time Series Forecasting Module
Model Name | mse | mae | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|
DLinear | 0.382 | 0.394 | 72 K | DLinear.yaml | Inference Model/Trained Model |
NLinear | 0.386 | 0.392 | 40 K | NLinear.yaml | Inference Model/Trained Model |
Nonstationary | 0.600 | 0.515 | 55.5 M | Nonstationary.yaml | Inference Model/Trained Model |
PatchTST | 0.385 | 0.397 | 2.0 M | PatchTST.yaml | Inference Model/Trained Model |
RLinear | 0.384 | 0.392 | 40 K | RLinear.yaml | Inference Model/Trained Model |
TiDE | 0.405 | 0.412 | 31.7 M | TiDE.yaml | Inference Model/Trained Model |
TimesNet | 0.417 | 0.431 | 4.9 M | TimesNet.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the ETTH1 dataset (evaluation results on the test set test.csv).
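For reference, the mse and mae columns above are the standard pointwise regression metrics, averaged over the forecast horizon. A minimal sketch of how they are computed from a ground-truth series and a forecast:

```python
def mse(y_true, y_pred):
    """Mean squared error over a forecast horizon."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error over a forecast horizon."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

Lower values are better for both; mse penalizes large pointwise errors more heavily than mae.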
Time Series Anomaly Detection Module
Model Name | Precision | Recall | f1_score | Model Size | YAML File | Model Download Link |
---|---|---|---|---|---|---|
AutoEncoder_ad | 99.36 | 84.36 | 91.25 | 52 K | AutoEncoder_ad.yaml | Inference Model/Trained Model |
DLinear_ad | 98.98 | 93.96 | 96.41 | 112 K | DLinear_ad.yaml | Inference Model/Trained Model |
Nonstationary_ad | 98.55 | 88.95 | 93.51 | 1.8 M | Nonstationary_ad.yaml | Inference Model/Trained Model |
PatchTST_ad | 98.78 | 90.70 | 94.57 | 320 K | PatchTST_ad.yaml | Inference Model/Trained Model |
TimesNet_ad | 98.37 | 94.80 | 96.56 | 1.3 M | TimesNet_ad.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the PSM dataset.
Time Series Classification Module
Model Name | acc (%) | Model Size | YAML File | Model Download Link |
---|---|---|---|---|
TimesNet_cls | 87.5 | 792 K | TimesNet_cls.yaml | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the UWaveGestureLibrary dataset.
> Note: All GPU inference times for the above models are based on an NVIDIA Tesla T4 machine with FP32 precision. CPU inference speeds are based on an Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz with 8 threads and FP32 precision.