# PaddleX Model List (Enflame GCU)

PaddleX provides multiple pipelines; each pipeline contains several modules, and each module offers a choice of models. Select a model using the benchmark data below: if accuracy matters most, choose a model with higher accuracy; if storage is the constraint, choose a model with a smaller size.

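Once you have picked a model from the tables below, it can be loaded and run through PaddleX's Python API. The sketch below is a minimal example, assuming PaddleX 3.x and the Enflame GCU Paddle plugin are installed; the model name, device index, and image path are placeholders you should replace with your own.

```python
from paddlex import create_model

# Load a model from the list below; "gcu:0" selects the first Enflame GCU device
# (assumes the GCU version of PaddlePaddle is installed in this environment).
model = create_model(model_name="PP-LCNet_x1_0", device="gcu:0")

# Run inference on a local image (placeholder path).
output = model.predict("path/to/image.jpg", batch_size=1)

for res in output:
    res.print()                  # print the prediction to stdout
    res.save_to_img("./output/") # save the visualized result
```

Swapping in a different model from the tables only requires changing `model_name`; the calling code stays the same across modules.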
## Image Classification Module

| Model Name | Top-1 Accuracy (%) | Model Size (M) | Model Download Link |
|---|---|---|---|
| ConvNeXt_base_224 | 83.84 | 313.9 | Inference Model/Trained Model |
| ConvNeXt_base_384 | 84.90 | 313.9 | Inference Model/Trained Model |
| ConvNeXt_large_224 | 84.26 | 700.7 | Inference Model/Trained Model |
| ConvNeXt_large_384 | 85.27 | 700.7 | Inference Model/Trained Model |
| ConvNeXt_small | 83.13 | 178.0 | Inference Model/Trained Model |
| ConvNeXt_tiny | 82.03 | 101.4 | Inference Model/Trained Model |
| FasterNet-L | 83.5 | 357.1 | Inference Model/Trained Model |
| FasterNet-M | 82.9 | 204.6 | Inference Model/Trained Model |
| FasterNet-S | 81.3 | 119.3 | Inference Model/Trained Model |
| FasterNet-T0 | 71.8 | 15.1 | Inference Model/Trained Model |
| FasterNet-T1 | 76.2 | 29.2 | Inference Model/Trained Model |
| FasterNet-T2 | 78.8 | 57.4 | Inference Model/Trained Model |
| MobileNetV1_x0_25 | 51.4 | 1.8 | Inference Model/Trained Model |
| MobileNetV1_x0_5 | 63.5 | 4.8 | Inference Model/Trained Model |
| MobileNetV1_x0_75 | 68.8 | 9.3 | Inference Model/Trained Model |
| MobileNetV1_x1_0 | 71.0 | 15.2 | Inference Model/Trained Model |
| MobileNetV2_x0_25 | 53.2 | 5.5 | Inference Model/Trained Model |
| MobileNetV2_x0_5 | 65.0 | 7.1 | Inference Model/Trained Model |
| MobileNetV2_x1_0 | 72.2 | 12.6 | Inference Model/Trained Model |
| MobileNetV2_x1_5 | 74.1 | 25.0 | Inference Model/Trained Model |
| MobileNetV2_x2_0 | 75.2 | 41.2 | Inference Model/Trained Model |
| MobileNetV3_large_x0_35 | 64.3 | 7.5 | Inference Model/Trained Model |
| MobileNetV3_large_x0_5 | 69.2 | 9.6 | Inference Model/Trained Model |
| MobileNetV3_large_x0_75 | 73.1 | 14.0 | Inference Model/Trained Model |
| MobileNetV3_large_x1_0 | 75.3 | 19.5 | Inference Model/Trained Model |
| MobileNetV3_large_x1_25 | 76.4 | 26.5 | Inference Model/Trained Model |
| MobileNetV3_small_x0_35 | 53.0 | 6.0 | Inference Model/Trained Model |
| MobileNetV3_small_x0_5 | 59.2 | 6.8 | Inference Model/Trained Model |
| MobileNetV3_small_x0_75 | 66.0 | 8.5 | Inference Model/Trained Model |
| MobileNetV3_small_x1_0 | 68.2 | 10.5 | Inference Model/Trained Model |
| MobileNetV3_small_x1_25 | 70.7 | 13.0 | Inference Model/Trained Model |
| MobileNetV4_conv_large | 83.4 | 125.2 | Inference Model/Trained Model |
| MobileNetV4_conv_medium | 80.9 | 37.6 | Inference Model/Trained Model |
| MobileNetV4_conv_small | 74.4 | 14.7 | Inference Model/Trained Model |
| PP-HGNet_base | 85.0 | 249.4 | Inference Model/Trained Model |
| PP-HGNet_small | 81.51 | 86.5 | Inference Model/Trained Model |
| PP-HGNet_tiny | 79.83 | 52.4 | Inference Model/Trained Model |
| PP-HGNetV2-B0 | 77.77 | 21.4 | Inference Model/Trained Model |
| PP-HGNetV2-B1 | 78.90 | 22.6 | Inference Model/Trained Model |
| PP-HGNetV2-B2 | 81.57 | 39.9 | Inference Model/Trained Model |
| PP-HGNetV2-B3 | 82.92 | 57.9 | Inference Model/Trained Model |
| PP-HGNetV2-B4 | 83.68 | 70.4 | Inference Model/Trained Model |
| PP-HGNetV2-B5 | 84.75 | 140.8 | Inference Model/Trained Model |
| PP-HGNetV2-B6 | 86.20 | 268.4 | Inference Model/Trained Model |
| PP-LCNet_x0_25 | 51.86 | 5.5 | Inference Model/Trained Model |
| PP-LCNet_x0_35 | 58.10 | 5.9 | Inference Model/Trained Model |
| PP-LCNet_x0_5 | 63.14 | 6.7 | Inference Model/Trained Model |
| PP-LCNet_x0_75 | 68.18 | 8.4 | Inference Model/Trained Model |
| PP-LCNet_x1_0 | 71.32 | 10.5 | Inference Model/Trained Model |
| PP-LCNet_x1_5 | 73.71 | 16.0 | Inference Model/Trained Model |
| PP-LCNet_x2_0 | 75.18 | 23.2 | Inference Model/Trained Model |
| PP-LCNet_x2_5 | 76.60 | 32.1 | Inference Model/Trained Model |
| PP-LCNetV2_base | 77.04 | 23.7 | Inference Model/Trained Model |
| PP-LCNetV2_large | 78.51 | 37.3 | Inference Model/Trained Model |
| PP-LCNetV2_small | 73.96 | 14.6 | Inference Model/Trained Model |
| ResNet18_vd | 72.3 | 41.5 | Inference Model/Trained Model |
| ResNet18 | 71.0 | 41.5 | Inference Model/Trained Model |
| ResNet34_vd | 76.0 | 77.3 | Inference Model/Trained Model |
| ResNet34 | 74.6 | 77.3 | Inference Model/Trained Model |
| ResNet50_vd | 79.1 | 90.8 | Inference Model/Trained Model |
| ResNet50 | 76.5 | 90.8 | Inference Model/Trained Model |
| ResNet101_vd | 80.2 | 158.4 | Inference Model/Trained Model |
| ResNet101 | 77.6 | 158.7 | Inference Model/Trained Model |
| ResNet152_vd | 80.6 | 214.3 | Inference Model/Trained Model |
| ResNet152 | 78.3 | 214.2 | Inference Model/Trained Model |
| ResNet200_vd | 80.7 | 266.0 | Inference Model/Trained Model |
| StarNet-S1 | 73.5 | 11.2 | Inference Model/Trained Model |
| StarNet-S2 | 74.7 | 14.3 | Inference Model/Trained Model |
| StarNet-S3 | 77.4 | 22.2 | Inference Model/Trained Model |
| StarNet-S4 | 78.8 | 28.9 | Inference Model/Trained Model |

Note: The above accuracy metrics refer to Top-1 Accuracy on the ImageNet-1k validation set.

## Object Detection Module

| Model Name | mAP (%) | Model Size (M) | Model Download Link |
|---|---|---|---|
| FCOS-ResNet50 | 39.6 | 124.2 | Inference Model/Trained Model |
| PicoDet-L | 42.5 | 20.9 | Inference Model/Trained Model |
| PicoDet-M | 37.4 | 16.8 | Inference Model/Trained Model |
| PicoDet-S | 29.0 | 4.4 | Inference Model/Trained Model |
| PicoDet-XS | 26.2 | 5.7 | Inference Model/Trained Model |
| PP-YOLOE_plus-L | 52.8 | 185.3 | Inference Model/Trained Model |
| PP-YOLOE_plus-M | 49.7 | 83.2 | Inference Model/Trained Model |
| PP-YOLOE_plus-S | 43.6 | 28.3 | Inference Model/Trained Model |
| PP-YOLOE_plus-X | 54.7 | 349.4 | Inference Model/Trained Model |
| RT-DETR-H | 56.3 | 435.8 | Inference Model/Trained Model |
| RT-DETR-L | 53.0 | 113.7 | Inference Model/Trained Model |
| RT-DETR-R18 | 46.5 | 70.7 | Inference Model/Trained Model |
| RT-DETR-R50 | 53.1 | 149.1 | Inference Model/Trained Model |
| RT-DETR-X | 54.8 | 232.9 | Inference Model/Trained Model |

Note: The above accuracy metrics are mAP(0.5:0.95) on the COCO2017 validation set.

## Pedestrian Detection Module

| Model Name | mAP (%) | Model Size (M) | Model Download Link |
|---|---|---|---|
| PP-YOLOE-L_human | 48.0 | 196.1 | Inference Model/Trained Model |
| PP-YOLOE-S_human | 42.5 | 28.8 | Inference Model/Trained Model |

Note: The above accuracy metrics are mAP(0.5:0.95) on the CrowdHuman validation set.

## Text Detection Module

| Model Name | Detection Hmean (%) | Model Size (M) | Model Download Link |
|---|---|---|---|
| PP-OCRv4_mobile_det | 77.79 | 4.2 | Inference Model/Trained Model |
| PP-OCRv4_server_det | 82.69 | 100.1 | Inference Model/Trained Model |

Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 500 images for detection.

## Text Recognition Module

| Model Name | Recognition Avg Accuracy (%) | Model Size (M) | Model Download Link |
|---|---|---|---|
| PP-OCRv4_mobile_rec | 78.20 | 10.6 | Inference Model/Trained Model |
| PP-OCRv4_server_rec | 79.20 | 71.2 | Inference Model/Trained Model |

Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 11,000 images for text recognition.