# PaddleX Model List (Huawei Ascend NPU)
PaddleX ships multiple pipelines; each pipeline contains several modules, and each module offers a choice of models. You can select a model based on the benchmark data below: if accuracy is your priority, choose the model with the highest accuracy; if storage footprint is the constraint, choose the model with the smallest size.
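All models on this page can be run on Huawei Ascend NPUs by selecting an `npu` device. As a minimal sketch (assuming the PaddleX 3.x Python API; the pipeline name, image path, and exact `device` string are placeholders to adapt to your setup):

```python
from paddlex import create_pipeline

# Build an image classification pipeline and bind it to the first Ascend NPU.
pipeline = create_pipeline(pipeline="image_classification", device="npu:0")

# Run inference on a local image (placeholder path) and inspect the results.
output = pipeline.predict("path/to/example.jpg")
for res in output:
    res.print()                    # print the prediction to stdout
    res.save_to_json("./output/")  # persist the result for later inspection
```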
## Image Classification Module
Model Name | Top-1 Accuracy (%) | Model Size | Model Download Link |
---|---|---|---|
CLIP_vit_base_patch16_224 | 85.36 | 306.5 M | Inference Model/Trained Model |
CLIP_vit_large_patch14_224 | 88.1 | 1.04 G | Inference Model/Trained Model |
ConvNeXt_base_224 | 83.84 | 313.9 M | Inference Model/Trained Model |
ConvNeXt_base_384 | 84.90 | 313.9 M | Inference Model/Trained Model |
ConvNeXt_large_224 | 84.26 | 700.7 M | Inference Model/Trained Model |
ConvNeXt_large_384 | 85.27 | 700.7 M | Inference Model/Trained Model |
ConvNeXt_small | 83.13 | 178.0 M | Inference Model/Trained Model |
ConvNeXt_tiny | 82.03 | 101.4 M | Inference Model/Trained Model |
MobileNetV1_x0_5 | 63.5 | 4.8 M | Inference Model/Trained Model |
MobileNetV1_x0_25 | 51.4 | 1.8 M | Inference Model/Trained Model |
MobileNetV1_x0_75 | 68.8 | 9.3 M | Inference Model/Trained Model |
MobileNetV1_x1_0 | 71.0 | 15.2 M | Inference Model/Trained Model |
MobileNetV2_x0_5 | 65.0 | 7.1 M | Inference Model/Trained Model |
MobileNetV2_x0_25 | 53.2 | 5.5 M | Inference Model/Trained Model |
MobileNetV2_x1_0 | 72.2 | 12.6 M | Inference Model/Trained Model |
MobileNetV2_x1_5 | 74.1 | 25.0 M | Inference Model/Trained Model |
MobileNetV2_x2_0 | 75.2 | 41.2 M | Inference Model/Trained Model |
MobileNetV3_large_x0_5 | 69.2 | 9.6 M | Inference Model/Trained Model |
MobileNetV3_large_x0_35 | 64.3 | 7.5 M | Inference Model/Trained Model |
MobileNetV3_large_x0_75 | 73.1 | 14.0 M | Inference Model/Trained Model |
MobileNetV3_large_x1_0 | 75.3 | 19.5 M | Inference Model/Trained Model |
MobileNetV3_large_x1_25 | 76.4 | 26.5 M | Inference Model/Trained Model |
MobileNetV3_small_x0_5 | 59.2 | 6.8 M | Inference Model/Trained Model |
MobileNetV3_small_x0_35 | 53.0 | 6.0 M | Inference Model/Trained Model |
MobileNetV3_small_x0_75 | 66.0 | 8.5 M | Inference Model/Trained Model |
MobileNetV3_small_x1_0 | 68.2 | 10.5 M | Inference Model/Trained Model |
MobileNetV3_small_x1_25 | 70.7 | 13.0 M | Inference Model/Trained Model |
MobileNetV4_conv_large | 83.4 | 125.2 M | Inference Model/Trained Model |
MobileNetV4_conv_medium | 79.9 | 37.6 M | Inference Model/Trained Model |
MobileNetV4_conv_small | 74.6 | 14.7 M | Inference Model/Trained Model |
MobileNetV4_hybrid_large | 83.8 | 145.1 M | Inference Model/Trained Model |
MobileNetV4_hybrid_medium | 80.5 | 42.9 M | Inference Model/Trained Model |
PP-HGNet_base | 85.0 | 249.4 M | Inference Model/Trained Model |
PP-HGNet_small | 81.51 | 86.5 M | Inference Model/Trained Model |
PP-HGNet_tiny | 79.83 | 52.4 M | Inference Model/Trained Model |
PP-HGNetV2-B0 | 77.77 | 21.4 M | Inference Model/Trained Model |
PP-HGNetV2-B1 | 79.18 | 22.6 M | Inference Model/Trained Model |
PP-HGNetV2-B2 | 81.74 | 39.9 M | Inference Model/Trained Model |
PP-HGNetV2-B3 | 82.98 | 57.9 M | Inference Model/Trained Model |
PP-HGNetV2-B4 | 83.57 | 70.4 M | Inference Model/Trained Model |
PP-HGNetV2-B5 | 84.75 | 140.8 M | Inference Model/Trained Model |
PP-HGNetV2-B6 | 86.30 | 268.4 M | Inference Model/Trained Model |
PP-LCNet_x0_5 | 63.14 | 6.7 M | Inference Model/Trained Model |
PP-LCNet_x0_25 | 51.86 | 5.5 M | Inference Model/Trained Model |
PP-LCNet_x0_35 | 58.09 | 5.9 M | Inference Model/Trained Model |
PP-LCNet_x0_75 | 68.18 | 8.4 M | Inference Model/Trained Model |
PP-LCNet_x1_0 | 71.32 | 10.5 M | Inference Model/Trained Model |
PP-LCNet_x1_5 | 73.71 | 16.0 M | Inference Model/Trained Model |
PP-LCNet_x2_0 | 75.18 | 23.2 M | Inference Model/Trained Model |
PP-LCNet_x2_5 | 76.60 | 32.1 M | Inference Model/Trained Model |
PP-LCNetV2_base | 77.05 | 23.7 M | Inference Model/Trained Model |
PP-LCNetV2_large | 78.51 | 37.3 M | Inference Model/Trained Model |
PP-LCNetV2_small | 73.97 | 14.6 M | Inference Model/Trained Model |
ResNet18_vd | 72.3 | 41.5 M | Inference Model/Trained Model |
ResNet18 | 71.0 | 41.5 M | Inference Model/Trained Model |
ResNet34_vd | 76.0 | 77.3 M | Inference Model/Trained Model |
ResNet34 | 74.6 | 77.3 M | Inference Model/Trained Model |
ResNet50_vd | 79.1 | 90.8 M | Inference Model/Trained Model |
ResNet50 | 76.5 | 90.8 M | Inference Model/Trained Model |
ResNet101_vd | 80.2 | 158.4 M | Inference Model/Trained Model |
ResNet101 | 77.6 | 158.7 M | Inference Model/Trained Model |
ResNet152_vd | 80.6 | 214.3 M | Inference Model/Trained Model |
ResNet152 | 78.3 | 214.2 M | Inference Model/Trained Model |
ResNet200_vd | 80.9 | 266.0 M | Inference Model/Trained Model |
SwinTransformer_base_patch4_window7_224 | 83.37 | 310.5 M | Inference Model/Trained Model |
SwinTransformer_base_patch4_window12_384 | 84.17 | 311.4 M | Inference Model/Trained Model |
SwinTransformer_large_patch4_window7_224 | 86.19 | 694.8 M | Inference Model/Trained Model |
SwinTransformer_large_patch4_window12_384 | 87.06 | 696.1 M | Inference Model/Trained Model |
SwinTransformer_small_patch4_window7_224 | 83.21 | 175.6 M | Inference Model/Trained Model |
SwinTransformer_tiny_patch4_window7_224 | 81.10 | 100.1 M | Inference Model/Trained Model |
StarNet-S1 | 73.6 | 11.2 M | Inference Model/Trained Model |
StarNet-S2 | 74.8 | 14.3 M | Inference Model/Trained Model |
StarNet-S3 | 77.0 | 22.2 M | Inference Model/Trained Model |
StarNet-S4 | 79.0 | 28.9 M | Inference Model/Trained Model |
FasterNet-L | 83.5 | 357.1 M | Inference Model/Trained Model |
FasterNet-M | 83.0 | 204.6 M | Inference Model/Trained Model |
FasterNet-S | 81.3 | 119.3 M | Inference Model/Trained Model |
FasterNet-T0 | 71.9 | 15.1 M | Inference Model/Trained Model |
FasterNet-T1 | 75.9 | 29.2 M | Inference Model/Trained Model |
FasterNet-T2 | 79.1 | 57.4 M | Inference Model/Trained Model |
Note: The above accuracy metrics refer to Top-1 Accuracy on the ImageNet-1k validation set.
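Individual models from the table above can also be used directly through the module-level API. The following is a hedged sketch assuming the PaddleX 3.x `create_model` interface; whether your installed version accepts a `device` argument here (and the exact result methods) should be verified against the official PaddleX documentation:

```python
from paddlex import create_model

# Load one of the image classification models listed above on the Ascend NPU.
# "PP-LCNet_x1_0" is only an example; any model name from the table applies.
model = create_model(model_name="PP-LCNet_x1_0", device="npu:0")

# Predict on a placeholder image; raise batch_size for higher throughput.
output = model.predict("path/to/example.jpg", batch_size=1)
for res in output:
    res.print()
```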
## Image Multi-label Classification Module
Model Name | mAP (%) | Model Storage Size | Model Download Link |
---|---|---|---|
CLIP_vit_base_patch16_448_ML | 89.15 | 325.6 M | Inference Model/Trained Model |
PP-HGNetV2-B0_ML | 80.98 | 39.6 M | Inference Model/Trained Model |
PP-HGNetV2-B4_ML | 87.96 | 88.5 M | Inference Model/Trained Model |
PP-HGNetV2-B6_ML | 91.06 | 286.5 M | Inference Model/Trained Model |
PP-LCNet_x1_0_ML | 77.96 | 29.4 M | Inference Model/Trained Model |
ResNet50_ML | 83.42 | 108.9 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP for the multi-label classification task on COCO2017.
## Pedestrian Attribute Module
Model Name | mA (%) | Model Size | Model Download Link |
---|---|---|---|
PP-LCNet_x1_0_pedestrian_attribute | 92.2 | 6.7 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mA on PaddleX's internal self-built dataset.
## Vehicle Attribute Module
Model Name | mA (%) | Model Size | Model Download Link |
---|---|---|---|
PP-LCNet_x1_0_vehicle_attribute | 91.7 | 6.7 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mA on the VeRi dataset.
## Object Detection Module
Model Name | mAP (%) | Model Size (M) | Model Download Link |
---|---|---|---|
Cascade-FasterRCNN-ResNet50-FPN | 41.1 | 245.4 M | Inference Model/Trained Model |
Cascade-FasterRCNN-ResNet50-vd-SSLDv2-FPN | 45.0 | 246.2 M | Inference Model/Trained Model |
CenterNet-DLA-34 | 37.6 | 75.4 M | Inference Model/Trained Model |
CenterNet-ResNet50 | 38.9 | 319.7 M | Inference Model/Trained Model |
DETR-R50 | 42.3 | 159.3 M | Inference Model/Trained Model |
FasterRCNN-ResNet34-FPN | 37.8 | 137.5 M | Inference Model/Trained Model |
FasterRCNN-ResNet50 | 36.7 | 120.2 M | Inference Model/Trained Model |
FasterRCNN-ResNet50-FPN | 38.4 | 148.1 M | Inference Model/Trained Model |
FasterRCNN-ResNet50-vd-FPN | 39.5 | 148.1 M | Inference Model/Trained Model |
FasterRCNN-ResNet50-vd-SSLDv2-FPN | 41.4 | 148.1 M | Inference Model/Trained Model |
FasterRCNN-ResNet101 | 39.0 | 188.1 M | Inference Model/Trained Model |
FasterRCNN-ResNet101-FPN | 41.4 | 216.3 M | Inference Model/Trained Model |
FasterRCNN-ResNeXt101-vd-FPN | 43.4 | 360.6 M | Inference Model/Trained Model |
FasterRCNN-Swin-Tiny-FPN | 42.6 | 159.8 M | Inference Model/Trained Model |
FCOS-ResNet50 | 39.6 | 124.2 M | Inference Model/Trained Model |
PicoDet-L | 42.6 | 20.9 M | Inference Model/Trained Model |
PicoDet-M | 37.5 | 16.8 M | Inference Model/Trained Model |
PicoDet-S | 29.1 | 4.4 M | Inference Model/Trained Model |
PicoDet-XS | 26.2 | 5.7 M | Inference Model/Trained Model |
PP-YOLOE_plus-L | 52.9 | 185.3 M | Inference Model/Trained Model |
PP-YOLOE_plus-M | 49.8 | 83.2 M | Inference Model/Trained Model |
PP-YOLOE_plus-S | 43.7 | 28.3 M | Inference Model/Trained Model |
PP-YOLOE_plus-X | 54.7 | 349.4 M | Inference Model/Trained Model |
RT-DETR-H | 56.3 | 435.8 M | Inference Model/Trained Model |
RT-DETR-L | 53.0 | 113.7 M | Inference Model/Trained Model |
RT-DETR-R18 | 46.5 | 70.7 M | Inference Model/Trained Model |
RT-DETR-R50 | 53.1 | 149.1 M | Inference Model/Trained Model |
RT-DETR-X | 54.8 | 232.9 M | Inference Model/Trained Model |
YOLOv3-DarkNet53 | 39.1 | 219.7 M | Inference Model/Trained Model |
YOLOv3-MobileNetV3 | 31.4 | 83.8 M | Inference Model/Trained Model |
YOLOv3-ResNet50_vd_DCN | 40.6 | 163.0 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the COCO2017 validation set.
## Small Object Detection Module
Model Name | mAP (%) | Model Size | Model Download Link |
---|---|---|---|
PP-YOLOE_plus_SOD-S | 25.1 | 77.3 M | Inference Model/Trained Model |
PP-YOLOE_plus_SOD-L | 31.9 | 325.0 M | Inference Model/Trained Model |
PP-YOLOE_plus_SOD-largesize-L | 42.7 | 340.5 M | Inference Model/Trained Model |
YOLOX-S | 40.4 | 32.0 M | Inference Model/Trained Model |
YOLOX-T | 32.9 | 18.1 M | Inference Model/Trained Model |
YOLOX-M | 46.9 | 90.0 M | Inference Model/Trained Model |
YOLOX-N | 26.1 | 3.4 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the VisDrone-DET validation set.
## Pedestrian Detection Module
Model Name | mAP (%) | Model Size | Model Download Link |
---|---|---|---|
PP-YOLOE-L_human | 48.0 | 196.1 M | Inference Model/Trained Model |
PP-YOLOE-S_human | 42.5 | 28.8 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the CrowdHuman validation set.
## Semantic Segmentation Module
Model Name | mIoU (%) | Model Size (M) | Model Download Link |
---|---|---|---|
Deeplabv3_Plus-R50 | 80.36 | 94.9 M | Inference Model/Trained Model |
Deeplabv3_Plus-R101 | 81.10 | 162.5 M | Inference Model/Trained Model |
Deeplabv3-R50 | 79.90 | 138.3 M | Inference Model/Trained Model |
Deeplabv3-R101 | 80.85 | 205.9 M | Inference Model/Trained Model |
OCRNet_HRNet-W18 | 80.67 | 43.1 M | Inference Model/Trained Model |
OCRNet_HRNet-W48 | 82.15 | 249.8 M | Inference Model/Trained Model |
PP-LiteSeg-T | 73.10 | 28.5 M | Inference Model/Trained Model |
SegFormer-B0 (slice) | 76.73 | 13.2 M | Inference Model/Trained Model |
SegFormer-B1 (slice) | 78.35 | 48.5 M | Inference Model/Trained Model |
SegFormer-B2 (slice) | 81.60 | 96.9 M | Inference Model/Trained Model |
SegFormer-B3 (slice) | 82.47 | 167.3 M | Inference Model/Trained Model |
SegFormer-B4 (slice) | 82.38 | 226.7 M | Inference Model/Trained Model |
SegFormer-B5 (slice) | 82.58 | 229.7 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mIoU on the Cityscapes dataset.
Model Name | mIoU (%) | Model Size | Model Download Link |
---|---|---|---|
SeaFormer_base (slice) | 40.92 | 30.8 M | Inference Model/Trained Model |
SeaFormer_large (slice) | 43.66 | 49.8 M | Inference Model/Trained Model |
SeaFormer_small (slice) | 38.73 | 14.3 M | Inference Model/Trained Model |
SeaFormer_tiny (slice) | 34.58 | 6.1 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the ADE20k dataset. "slice" indicates that the input image has been cropped.
## Instance Segmentation Module
Model Name | Mask AP | Model Size (M) | Model Download Link |
---|---|---|---|
Mask-RT-DETR-H | 50.6 | 449.9 M | Inference Model/Trained Model |
Mask-RT-DETR-L | 45.7 | 113.6 M | Inference Model/Trained Model |
Mask-RT-DETR-M | 42.7 | 66.6 M | Inference Model/Trained Model |
Mask-RT-DETR-S | 41.0 | 51.8 M | Inference Model/Trained Model |
Mask-RT-DETR-X | 47.5 | 237.5 M | Inference Model/Trained Model |
Cascade-MaskRCNN-ResNet50-FPN | 36.3 | 254.8 M | Inference Model/Trained Model |
Cascade-MaskRCNN-ResNet50-vd-SSLDv2-FPN | 39.1 | 254.7 M | Inference Model/Trained Model |
MaskRCNN-ResNet50-FPN | 35.6 | 157.5 M | Inference Model/Trained Model |
MaskRCNN-ResNet50-vd-FPN | 36.4 | 157.5 M | Inference Model/Trained Model |
MaskRCNN-ResNet50 | 32.8 | 127.8 M | Inference Model/Trained Model |
MaskRCNN-ResNet101-FPN | 36.6 | 225.4 M | Inference Model/Trained Model |
MaskRCNN-ResNet101-vd-FPN | 38.1 | 225.1 M | Inference Model/Trained Model |
MaskRCNN-ResNeXt101-vd-FPN | 39.5 | 370.0 M | Inference Model/Trained Model |
PP-YOLOE_seg-S | 32.5 | 31.5 M | Inference Model/Trained Model |
Note: The above accuracy metrics are Mask AP(0.5:0.95) on the COCO2017 validation set.
## Image Feature Module
Model Name | recall@1 (%) | Model Size | Model Download Link |
---|---|---|---|
PP-ShiTuV2_rec | 84.2 | 16.3 M | Inference Model/Trained Model |
PP-ShiTuV2_rec_CLIP_vit_base | 88.69 | 306.6 M | Inference Model/Trained Model |
PP-ShiTuV2_rec_CLIP_vit_large | 91.03 | 1.05 G | Inference Model/Trained Model |
Note: The above accuracy metrics are recall@1 on AliProducts.
## Main Body Detection Module
Model Name | mAP (%) | Model Size | Model Download Link |
---|---|---|---|
PP-ShiTuV2_det | 41.5 | 27.6 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the PaddleClas main body detection dataset.
## Vehicle Detection Module
Model Name | mAP (%) | Model Size | Model Download Link |
---|---|---|---|
PP-YOLOE-L_vehicle | 63.9 | 196.1 M | Inference Model/Trained Model |
PP-YOLOE-S_vehicle | 61.3 | 28.8 M | Inference Model/Trained Model |
Note: The above accuracy metrics are mAP(0.5:0.95) on the PPVehicle validation set.
## Face Detection Module
Model Name | AP (%) Easy/Medium/Hard | Model Size | Model Download Link |
---|---|---|---|
PicoDet_LCNet_x2_5_face | 93.7/90.7/68.1 | 28.9 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the WIDER-FACE validation set with an input size of 640×640.
## Abnormality Detection Module
Model Name | Avg (%) | Model Size | Model Download Link |
---|---|---|---|
STFPM | 96.2 | 21.5 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on the MVTec AD dataset using the average anomaly score.
## Text Detection Module
Model Name | Detection Hmean (%) | Model Size (M) | Model Download Link |
---|---|---|---|
PP-OCRv4_mobile_det | 77.79 | 4.2 M | Inference Model/Trained Model |
PP-OCRv4_server_det | 82.69 | 100.1 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 500 images for detection.
## Text Recognition Module
Model Name | Recognition Avg Accuracy (%) | Model Size (M) | Model Download Link |
---|---|---|---|
PP-OCRv4_mobile_rec | 78.20 | 10.6 M | Inference Model/Trained Model |
PP-OCRv4_server_rec | 79.20 | 71.2 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on PaddleOCR's self-built Chinese dataset, covering street scenes, web images, documents, and handwritten scenarios, with 11,000 images for text recognition.
Model Name | Recognition Avg Accuracy (%) | Model Size (M) | Model Download Link |
---|---|---|---|
ch_SVTRv2_rec | 68.81 | 73.9 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on Leaderboard A of the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition.
Model Name | Recognition Avg Accuracy (%) | Model Size (M) | Model Download Link |
---|---|---|---|
ch_RepSVTR_rec | 65.07 | 22.1 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on Leaderboard B of the PaddleOCR Algorithm Model Challenge - Task 1: OCR End-to-End Recognition.
## Table Structure Recognition Module
Model Name | Accuracy (%) | Model Size (M) | Model Download Link |
---|---|---|---|
SLANet | 76.31 | 6.9 M | Inference Model/Trained Model |
SLANet_plus | 63.69 | 6.9 M | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the PubtabNet English table recognition dataset.
## Image Rectification Module
Model Name | MS-SSIM (%) | Model Size | Model Download Link |
---|---|---|---|
UVDoc | 54.40 | 30.3 M | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on a self-built image rectification dataset by PaddleX.
## Seal Text Detection Module
Model Name | Detection Hmean (%) | Model Size | Model Download Link |
---|---|---|---|
PP-OCRv4_mobile_seal_det | 96.47 | 4.7 M | Inference Model/Trained Model |
PP-OCRv4_server_seal_det | 98.21 | 108.3 M | Inference Model/Trained Model |
Note: The above accuracy metrics are evaluated on a self-built seal dataset by PaddleX, containing 500 seal images.
## Document Orientation Classification Module
Model Name | Top-1 Acc (%) | Model Size | Model Download Link |
---|---|---|---|
PP-LCNet_x1_0_doc_ori | 99.26 | 7.1 M | Inference Model/Trained Model |
Note: The above accuracy metrics are Top-1 Acc on PaddleX's internal self-built dataset.
## Layout Detection Module
Model Name | mAP (%) | Model Size (M) | Model Download Link |
---|---|---|---|
PicoDet_layout_1x | 86.8 | 7.4 M | Inference Model/Trained Model |
PicoDet-L_layout_3cls | 89.3 | 22.6 M | Inference Model/Trained Model |
RT-DETR-H_layout_3cls | 95.9 | 470.1 M | Inference Model/Trained Model |
RT-DETR-H_layout_17cls | 92.6 | 470.2 M | Inference Model/Trained Model |
Note: The evaluation set for the above accuracy metrics is PaddleOCR's self-built layout analysis dataset, containing 10,000 images.
## Time Series Forecasting Module
Model Name | MSE | MAE | Model Size | Model Download Link |
---|---|---|---|---|
DLinear | 0.382 | 0.394 | 72 K | Inference Model/Trained Model |
NLinear | 0.386 | 0.392 | 40 K | Inference Model/Trained Model |
Nonstationary | 0.600 | 0.515 | 55.5 M | Inference Model/Trained Model |
PatchTST | 0.385 | 0.397 | 2.0 M | Inference Model/Trained Model |
RLinear | 0.384 | 0.392 | 40 K | Inference Model/Trained Model |
TiDE | 0.405 | 0.412 | 31.7 M | Inference Model/Trained Model |
TimesNet | 0.417 | 0.431 | 4.9 M | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the ETTh1 dataset (evaluation results on the test set test.csv).
## Time Series Anomaly Detection Module
Model Name | Precision | Recall | F1-Score | Model Size | Model Download Link |
---|---|---|---|---|---|
AutoEncoder_ad | 99.36 | 84.36 | 91.25 | 52 K | Inference Model/Trained Model |
DLinear_ad | 98.98 | 93.96 | 96.41 | 112 K | Inference Model/Trained Model |
Nonstationary_ad | 98.55 | 88.95 | 93.51 | 1.8 M | Inference Model/Trained Model |
PatchTST_ad | 98.78 | 90.70 | 94.57 | 320 K | Inference Model/Trained Model |
TimesNet_ad | 98.37 | 94.80 | 96.56 | 1.3 M | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the PSM dataset.
## Time Series Classification Module
Model Name | Acc (%) | Model Size | Model Download Link |
---|---|---|---|
TimesNet_cls | 87.5 | 792 K | Inference Model/Trained Model |
Note: The above accuracy metrics are measured on the UWaveGestureLibrary dataset (Training and Evaluation splits).
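Unlike the image modules above, the time series modules consume tabular data rather than images. The sketch below assumes the same module-level API with `DLinear` from the forecasting table; the CSV path is a placeholder, and the expected column layout (a time column plus target columns) should be checked against the PaddleX time series documentation:

```python
from paddlex import create_model

# Load a time series forecasting model from the table above on the Ascend NPU.
model = create_model(model_name="DLinear", device="npu:0")

# Input is a CSV of historical observations (placeholder path).
output = model.predict("path/to/history.csv", batch_size=1)
for res in output:
    res.print()
    res.save_to_csv("./output/")  # write the forecast out as CSV (if supported)
```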