
General Image Classification Pipeline Tutorial

1. Introduction to the General Image Classification Pipeline

Image classification is a technique that assigns images to predefined categories. It is widely applied in object recognition, scene understanding, and automatic annotation. Image classification can identify various objects such as animals, plants, and traffic signs, and categorize them based on their features. By leveraging deep learning models, image classification can automatically extract image features and perform accurate classification.

The General Image Classification Pipeline includes an image classification module. If you prioritize accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference; if you prioritize storage, choose a model with a smaller storage size.

| Model | Model Download Link | Top-1 Acc (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Storage Size (M) |
|---|---|---|---|---|---|
| CLIP_vit_base_patch16_224 | Inference Model / Trained Model | 85.36 | 13.1957 | 285.493 | 306.5 M |
| MobileNetV3_small_x1_0 | Inference Model / Trained Model | 68.2 | 6.00993 | 12.9598 | 10.5 M |
| PP-HGNet_small | Inference Model / Trained Model | 81.51 | 5.50661 | 119.041 | 86.5 M |
| PP-HGNetV2-B0 | Inference Model / Trained Model | 77.77 | 6.53694 | 23.352 | 21.4 M |
| PP-HGNetV2-B4 | Inference Model / Trained Model | 83.57 | 9.66407 | 54.2462 | 70.4 M |
| PP-HGNetV2-B6 | Inference Model / Trained Model | 86.30 | 21.226 | 255.279 | 268.4 M |
| PP-LCNet_x1_0 | Inference Model / Trained Model | 71.32 | 3.84845 | 9.23735 | 10.5 M |
| ResNet50 | Inference Model / Trained Model | 76.5 | 9.62383 | 64.8135 | 90.8 M |
| SwinTransformer_tiny_patch4_window7_224 | Inference Model / Trained Model | 81.10 | 8.54846 | 156.306 | 100.1 M |

❗ The above list features the 9 core models that the image classification module primarily supports. In total, this module supports 80 models. The complete list of models is as follows:

👉Details of Model List
| Model | Model Download Link | Top-1 Accuracy (%) | GPU Inference Time (ms) | CPU Inference Time (ms) | Model Size (M) | Description |
|---|---|---|---|---|---|---|
| CLIP_vit_base_patch16_224 | Inference Model / Trained Model | 85.36 | 13.1957 | 285.493 | 306.5 M | CLIP is an image classification model based on the correlation between vision and language. It adopts contrastive learning and pre-training methods to achieve unsupervised or weakly supervised image classification, and is especially suitable for large-scale datasets. By mapping images and texts into the same representation space, the model learns general features, exhibiting good generalization ability and interpretability. With relatively low training error, it performs well in many downstream tasks. |
| CLIP_vit_large_patch14_224 | Inference Model / Trained Model | 88.1 | 51.1284 | 1131.28 | 1.04 G | |
| ConvNeXt_base_224 | Inference Model / Trained Model | 83.84 | 12.8473 | 1513.87 | 313.9 M | The ConvNeXt series of models was proposed by Meta in 2022, based on the CNN architecture. Building upon ResNet, it incorporates the advantages of SwinTransformer, including its training strategies and network structure optimizations, to improve the pure CNN architecture and explore the performance limits of convolutional neural networks. The ConvNeXt series retains many advantages of convolutional neural networks, including high inference efficiency and ease of transfer to downstream tasks. |
| ConvNeXt_base_384 | Inference Model / Trained Model | 84.90 | 31.7607 | 3967.05 | 313.9 M | |
| ConvNeXt_large_224 | Inference Model / Trained Model | 84.26 | 26.8103 | 2463.56 | 700.7 M | |
| ConvNeXt_large_384 | Inference Model / Trained Model | 85.27 | 66.4058 | 6598.92 | 700.7 M | |
| ConvNeXt_small | Inference Model / Trained Model | 83.13 | 9.74075 | 1127.6 | 178.0 M | |
| ConvNeXt_tiny | Inference Model / Trained Model | 82.03 | 5.48923 | 672.559 | 104.1 M | |
| FasterNet-L | Inference Model / Trained Model | 83.5 | 23.4415 | - | 357.1 M | FasterNet is a neural network designed to improve runtime speed. Its key improvements are as follows:<br>1. It re-examined popular operators and found that low FLOPS mainly stem from frequent memory accesses, especially in depthwise convolutions;<br>2. It proposed Partial Convolution (PConv) to extract image features more efficiently by reducing redundant computation and memory access;<br>3. It launched the FasterNet series of models based on PConv, a new design scheme that achieves significantly higher runtime speed on various devices without compromising model task performance. |
| FasterNet-M | Inference Model / Trained Model | 83.0 | 21.8936 | - | 204.6 M | |
| FasterNet-S | Inference Model / Trained Model | 81.3 | 13.0409 | - | 119.3 M | |
| FasterNet-T0 | Inference Model / Trained Model | 71.9 | 12.2432 | - | 15.1 M | |
| FasterNet-T1 | Inference Model / Trained Model | 75.9 | 11.3562 | - | 29.2 M | |
| FasterNet-T2 | Inference Model / Trained Model | 79.1 | 10.703 | - | 57.4 M | |
| MobileNetV1_x0_5 | Inference Model / Trained Model | 63.5 | 1.86754 | 7.48297 | 4.8 M | MobileNetV1 is a network released by Google in 2017 for mobile or embedded devices. It decomposes traditional convolution operations into depthwise separable convolutions, a combination of depthwise convolution and pointwise convolution. Compared to traditional convolutional networks, this combination significantly reduces the number of parameters and computations. The network can be used for image classification and other vision tasks. |
| MobileNetV1_x0_25 | Inference Model / Trained Model | 51.4 | 1.83478 | 4.83674 | 1.8 M | |
| MobileNetV1_x0_75 | Inference Model / Trained Model | 68.8 | 2.57903 | 10.6343 | 9.3 M | |
| MobileNetV1_x1_0 | Inference Model / Trained Model | 71.0 | 2.78781 | 13.98 | 15.2 M | |
| MobileNetV2_x0_5 | Inference Model / Trained Model | 65.0 | 4.94234 | 11.1629 | 7.1 M | MobileNetV2 is a lightweight network proposed by Google following MobileNetV1. Compared to MobileNetV1, it introduces linear bottlenecks and inverted residual blocks as the basic network structure, which is formed by stacking these modules extensively. Ultimately, it achieves higher classification accuracy with only half the FLOPs of MobileNetV1. |
| MobileNetV2_x0_25 | Inference Model / Trained Model | 53.2 | 4.50856 | 9.40991 | 5.5 M | |
| MobileNetV2_x1_0 | Inference Model / Trained Model | 72.2 | 6.12159 | 16.0442 | 12.6 M | |
| MobileNetV2_x1_5 | Inference Model / Trained Model | 74.1 | 6.28385 | 22.5129 | 25.0 M | |
| MobileNetV2_x2_0 | Inference Model / Trained Model | 75.2 | 6.12888 | 30.8612 | 41.2 M | |
| MobileNetV3_large_x0_5 | Inference Model / Trained Model | 69.2 | 6.31302 | 14.5588 | 9.6 M | MobileNetV3 is a NAS-based lightweight network proposed by Google in 2019. To further enhance performance, the relu and sigmoid activation functions are replaced with hard_swish and hard_sigmoid, respectively, and some strategies specifically designed to reduce network computation are introduced. |
| MobileNetV3_large_x0_35 | Inference Model / Trained Model | 64.3 | 5.76207 | 13.9041 | 7.5 M | |
| MobileNetV3_large_x0_75 | Inference Model / Trained Model | 73.1 | 8.41737 | 16.9506 | 14.0 M | |
| MobileNetV3_large_x1_0 | Inference Model / Trained Model | 75.3 | 8.64112 | 19.1614 | 19.5 M | |
| MobileNetV3_large_x1_25 | Inference Model / Trained Model | 76.4 | 8.73358 | 22.1296 | 26.5 M | |
| MobileNetV3_small_x0_5 | Inference Model / Trained Model | 59.2 | 5.16721 | 11.2688 | 6.8 M | |
| MobileNetV3_small_x0_35 | Inference Model / Trained Model | 53.0 | 5.22053 | 11.0055 | 6.0 M | |
| MobileNetV3_small_x0_75 | Inference Model / Trained Model | 66.0 | 5.39831 | 12.8313 | 8.5 M | |
| MobileNetV3_small_x1_0 | Inference Model / Trained Model | 68.2 | 6.00993 | 12.9598 | 10.5 M | |
| MobileNetV3_small_x1_25 | Inference Model / Trained Model | 70.7 | 6.9589 | 14.3995 | 13.0 M | |
| MobileNetV4_conv_large | Inference Model / Trained Model | 83.4 | 12.5485 | 51.6453 | 125.2 M | MobileNetV4 is an efficient architecture specifically designed for mobile devices. Its core is the UIB (Universal Inverted Bottleneck) module, a unified and flexible structure that integrates the IB (Inverted Bottleneck), ConvNeXt, FFN (Feed Forward Network), and the latest ExtraDW (Extra Depthwise) modules. Alongside UIB, Mobile MQA, an attention block customized for mobile accelerators, was introduced, achieving a significant speedup of up to 39%. MobileNetV4 also introduces a novel Neural Architecture Search (NAS) scheme to enhance the effectiveness of the search process. |
| MobileNetV4_conv_medium | Inference Model / Trained Model | 79.9 | 9.65509 | 26.6157 | 37.6 M | |
| MobileNetV4_conv_small | Inference Model / Trained Model | 74.6 | 5.24172 | 11.0893 | 14.7 M | |
| MobileNetV4_hybrid_large | Inference Model / Trained Model | 83.8 | 20.0726 | 213.769 | 145.1 M | |
| MobileNetV4_hybrid_medium | Inference Model / Trained Model | 80.5 | 19.7543 | 62.2624 | 42.9 M | |
| PP-HGNet_base | Inference Model / Trained Model | 85.0 | 14.2969 | 327.114 | 249.4 M | PP-HGNet (High Performance GPU Net) is a high-performance backbone network developed by Baidu PaddlePaddle's vision team for GPU platforms. It combines the fundamentals of VOVNet with learnable downsampling layers (LDS Layer), incorporating the advantages of models such as ResNet_vd and PPHGNet. On GPU platforms, it achieves higher accuracy than other SOTA models at the same speed: it outperforms ResNet34-D by 3.8 percentage points and ResNet50-D by 2.4 percentage points, and under the same SLSD conditions it ultimately surpasses ResNet50-D by 4.7 percentage points. At the same level of accuracy, its inference speed also significantly exceeds that of mainstream Vision Transformers. |
| PP-HGNet_small | Inference Model / Trained Model | 81.51 | 5.50661 | 119.041 | 86.5 M | |
| PP-HGNet_tiny | Inference Model / Trained Model | 79.83 | 5.22006 | 69.396 | 52.4 M | |
| PP-HGNetV2-B0 | Inference Model / Trained Model | 77.77 | 6.53694 | 23.352 | 21.4 M | PP-HGNetV2 (High Performance GPU Network V2) is the next-generation version of Baidu PaddlePaddle's PP-HGNet, featuring further optimizations and improvements. It pushes the accuracy-latency balance on NVIDIA GPUs to the limit, significantly outperforming models with similar inference speed in terms of accuracy. It demonstrates strong performance across various label classification and evaluation scenarios. |
| PP-HGNetV2-B1 | Inference Model / Trained Model | 79.18 | 6.56034 | 27.3099 | 22.6 M | |
| PP-HGNetV2-B2 | Inference Model / Trained Model | 81.74 | 9.60494 | 43.1219 | 39.9 M | |
| PP-HGNetV2-B3 | Inference Model / Trained Model | 82.98 | 11.0042 | 55.1367 | 57.9 M | |
| PP-HGNetV2-B4 | Inference Model / Trained Model | 83.57 | 9.66407 | 54.2462 | 70.4 M | |
| PP-HGNetV2-B5 | Inference Model / Trained Model | 84.75 | 15.7091 | 115.926 | 140.8 M | |
| PP-HGNetV2-B6 | Inference Model / Trained Model | 86.30 | 21.226 | 255.279 | 268.4 M | |
| PP-LCNet_x0_5 | Inference Model / Trained Model | 63.14 | 3.67722 | 6.66857 | 6.7 M | PP-LCNet is a lightweight backbone network developed by Baidu PaddlePaddle's vision team. It enhances model performance without increasing inference time, significantly surpassing other lightweight SOTA models. |
| PP-LCNet_x0_25 | Inference Model / Trained Model | 51.86 | 2.65341 | 5.81357 | 5.5 M | |
| PP-LCNet_x0_35 | Inference Model / Trained Model | 58.09 | 2.7212 | 6.28944 | 5.9 M | |
| PP-LCNet_x0_75 | Inference Model / Trained Model | 68.18 | 3.91032 | 8.06953 | 8.4 M | |
| PP-LCNet_x1_0 | Inference Model / Trained Model | 71.32 | 3.84845 | 9.23735 | 10.5 M | |
| PP-LCNet_x1_5 | Inference Model / Trained Model | 73.71 | 3.97666 | 12.3457 | 16.0 M | |
| PP-LCNet_x2_0 | Inference Model / Trained Model | 75.18 | 4.07556 | 16.2752 | 23.2 M | |
| PP-LCNet_x2_5 | Inference Model / Trained Model | 76.60 | 4.06028 | 21.5063 | 32.1 M | |
| PP-LCNetV2_base | Inference Model / Trained Model | 77.05 | 5.23428 | 19.6005 | 23.7 M | The PP-LCNetV2 image classification model is the next-generation version of PP-LCNet, self-developed by Baidu PaddlePaddle's vision team. Building on PP-LCNet, it primarily uses re-parameterization strategies to combine depthwise convolutions with varying kernel sizes and optimizes pointwise convolutions, shortcut connections, etc. Without using additional data, the PP-LCNetV2_base model achieves over 77% Top-1 Accuracy on the ImageNet dataset while maintaining an inference time below 4.4 ms on Intel CPU platforms. |
| PP-LCNetV2_large | Inference Model / Trained Model | 78.51 | 6.78335 | 30.4378 | 37.3 M | |
| PP-LCNetV2_small | Inference Model / Trained Model | 73.97 | 3.89762 | 13.0273 | 14.6 M | |
| ResNet18_vd | Inference Model / Trained Model | 72.3 | 3.53048 | 31.3014 | 41.5 M | The ResNet series of models was introduced in 2015, winning the ILSVRC2015 competition with a top-5 error rate of 3.57%. The network innovatively proposed the residual structure, and the ResNet network is built by stacking residual blocks. Experiments have shown that residual blocks effectively improve convergence speed and accuracy. |
| ResNet18 | Inference Model / Trained Model | 71.0 | 2.4868 | 27.4601 | 41.5 M | |
| ResNet34_vd | Inference Model / Trained Model | 76.0 | 5.60675 | 56.0653 | 77.3 M | |
| ResNet34 | Inference Model / Trained Model | 74.6 | 4.16902 | 51.925 | 77.3 M | |
| ResNet50_vd | Inference Model / Trained Model | 79.1 | 10.1885 | 68.446 | 90.8 M | |
| ResNet50 | Inference Model / Trained Model | 76.5 | 9.62383 | 64.8135 | 90.8 M | |
| ResNet101_vd | Inference Model / Trained Model | 80.2 | 20.0563 | 124.85 | 158.4 M | |
| ResNet101 | Inference Model / Trained Model | 77.6 | 19.2297 | 121.006 | 158.4 M | |
| ResNet152_vd | Inference Model / Trained Model | 80.6 | 29.6439 | 181.678 | 214.3 M | |
| ResNet152 | Inference Model / Trained Model | 78.3 | 30.0461 | 177.707 | 214.2 M | |
| ResNet200_vd | Inference Model / Trained Model | 80.9 | 39.1628 | 235.185 | 266.0 M | |
| StarNet-S1 | Inference Model / Trained Model | 73.6 | 9.895 | 23.0465 | 11.2 M | StarNet focuses on exploring the untapped potential of "star operations" (i.e., element-wise multiplication) in network design. It reveals that star operations can map inputs to high-dimensional, nonlinear feature spaces, a process akin to kernel tricks but without the need to enlarge the network. StarNet, a simple yet powerful prototype network, is accordingly proposed, demonstrating exceptional performance and low latency under compact network structures and limited computational resources. |
| StarNet-S2 | Inference Model / Trained Model | 74.8 | 7.91279 | 21.9571 | 14.3 M | |
| StarNet-S3 | Inference Model / Trained Model | 77.0 | 10.7531 | 30.7656 | 22.2 M | |
| StarNet-S4 | Inference Model / Trained Model | 79.0 | 15.2868 | 43.2497 | 28.9 M | |
| SwinTransformer_base_patch4_window7_224 | Inference Model / Trained Model | 83.37 | 16.9848 | 383.83 | 310.5 M | SwinTransformer is a novel vision Transformer network that can serve as a general-purpose backbone for computer vision tasks. It consists of a hierarchical Transformer structure computed with shifted windows. Shifted windows restrict self-attention computation to non-overlapping local windows while allowing cross-window connections, thereby enhancing network performance. |
| SwinTransformer_base_patch4_window12_384 | Inference Model / Trained Model | 84.17 | 37.2855 | 1178.63 | 311.4 M | |
| SwinTransformer_large_patch4_window7_224 | Inference Model / Trained Model | 86.19 | 27.5498 | 689.729 | 694.8 M | |
| SwinTransformer_large_patch4_window12_384 | Inference Model / Trained Model | 87.06 | 74.1768 | 2105.22 | 696.1 M | |
| SwinTransformer_small_patch4_window7_224 | Inference Model / Trained Model | 83.21 | 16.3982 | 285.56 | 175.6 M | |
| SwinTransformer_tiny_patch4_window7_224 | Inference Model / Trained Model | 81.10 | 8.54846 | 156.306 | 100.1 M | |

Note: The above accuracy metrics refer to Top-1 Accuracy on the ImageNet-1k validation set. All model GPU inference times are based on NVIDIA Tesla T4 machines, with precision type FP32. CPU inference speeds are based on Intel® Xeon® Gold 5117 CPU @ 2.00GHz, with 8 threads and precision type FP32.

2. Quick Start

PaddleX provides pre-trained model pipelines that can be quickly experienced. You can experience the effects of the General Image Classification Pipeline online or locally using command line or Python.

2.1 Online Experience

You can experience the effects of the General Image Classification Pipeline online using the official demo images. For example:

If you are satisfied with the pipeline's performance, you can directly integrate and deploy it. If not, you can also use your private data to fine-tune the model within the pipeline.

2.2 Local Experience

Before using the General Image Classification Pipeline locally, ensure you have installed the PaddleX wheel package following the PaddleX Local Installation Tutorial.

2.2.1 Command Line Experience

A single command is all you need to quickly experience the image classification pipeline. Use the test file, and replace --input with a local path to run prediction:

```bash
paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0
```

Parameter Explanation:

--pipeline: The name of the pipeline, here it is the image classification pipeline.
--input: The local path or URL of the input image to be processed.
--device: The GPU index to use (e.g., gpu:0 for the first GPU, gpu:1,2 for the second and third GPUs). You can also choose to use CPU (--device cpu).

When executing the above command, the default image classification pipeline configuration file is loaded. If you need to customize the configuration file, you can execute the following command to obtain it:

```bash
paddlex --get_pipeline_config image_classification
```

After execution, the image classification pipeline configuration file will be saved in the current path. If you wish to customize the save location, you can execute the following command (assuming the custom save location is ./my_path):

```bash
paddlex --get_pipeline_config image_classification --save_path ./my_path
```

After obtaining the pipeline configuration file, replace --pipeline with the configuration file's save path to make the configuration file take effect. For example, if the configuration file's save path is ./image_classification.yaml, simply execute:

```bash
paddlex --pipeline ./image_classification.yaml --input general_image_classification_001.jpg --device gpu:0
```

Here, parameters such as --model and --device do not need to be specified, as they will use the parameters in the configuration file. If you still specify parameters, the specified parameters will take precedence.

After running, the result will be:

```
{'input_path': 'general_image_classification_001.jpg', 'class_ids': [296, 170, 356, 258, 248], 'scores': [0.62736, 0.03752, 0.03256, 0.0323, 0.03194], 'label_names': ['ice bear, polar bear, Ursus Maritimus, Thalarctos maritimus', 'Irish wolfhound', 'weasel', 'Samoyed, Samoyede', 'Eskimo dog, husky']}
```

The visualized image is not saved by default. You can customize the save path via --save_path, and all results will then be saved to the specified path.

2.2.2 Integration via Python Script

A few lines of code can complete the quick inference of the pipeline. Taking the general image classification pipeline as an example:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="image_classification")

output = pipeline.predict("general_image_classification_001.jpg")
for res in output:
    res.print()  # Print the structured output of the prediction
    res.save_to_img("./output/")  # Save the visualization image of the result
    res.save_to_json("./output/")  # Save the structured output of the prediction
```

The results obtained are the same as those obtained through the command line method.

In the above Python script, the following steps are executed:

(1) Call create_pipeline to instantiate a pipeline object. The specific parameters are described below (a usage sketch follows the table):

| Parameter | Description | Type | Default |
|---|---|---|---|
| pipeline | The name of the pipeline or the path to the pipeline configuration file. If it is a pipeline name, it must be a pipeline supported by PaddleX. | str | None |
| device | The device for pipeline model inference. Supports: "gpu", "cpu". | str | "gpu" |
| use_hpip | Whether to enable high-performance inference, only available when the pipeline supports it. | bool | False |
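For instance, a minimal sketch of instantiating the pipeline with these parameters spelled out explicitly (the values shown are the documented defaults):

```python
from paddlex import create_pipeline

# A minimal sketch mirroring the parameter table above.
pipeline = create_pipeline(
    pipeline="image_classification",  # pipeline name or path to a config file
    device="gpu",                     # or "cpu"
    use_hpip=False,                   # high-performance inference plugin, off by default
)
```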

(2) Call the predict method of the image classification pipeline object for inference: the predict method takes a parameter x, the data to be predicted, and supports multiple input types, as shown in the following table (a usage sketch follows the table):

| Parameter Type | Description |
|---|---|
| Python Var | Supports directly passing Python variables, such as numpy.ndarray image data. |
| str | Supports passing the path of the file to be predicted, such as the local path of an image file: /root/data/img.jpg. |
| str | Supports passing the URL of the file to be predicted, such as the network URL of an image file. |
| str | Supports passing a local directory containing the files to be predicted, such as /root/data/. |
| dict | Supports passing a dictionary whose key corresponds to the specific task, such as "img" for image classification; the value supports the above data types, e.g., {"img": "/root/data1"}. |
| list | Supports passing a list whose elements are the above data types, such as [numpy.ndarray, numpy.ndarray], ["/root/data/img1.jpg", "/root/data/img2.jpg"], ["/root/data1", "/root/data2"], [{"img": "/root/data1"}, {"img": "/root/data2/img.jpg"}]. |
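To illustrate, here is a hedged sketch of the input forms listed above; the file and directory paths are placeholders:

```python
import numpy as np
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="image_classification")

# Each entry below is one of the input types from the table
# (the paths are hypothetical):
inputs = [
    np.zeros((224, 224, 3), dtype=np.uint8),  # Python variable: image as numpy.ndarray
    "/root/data/img.jpg",                     # str: local file path
    "/root/data/",                            # str: local directory of files
    {"img": "/root/data/img.jpg"},            # dict: key matches the task ("img")
]
output = pipeline.predict(inputs)             # list: mixes the types above
for res in output:
    res.print()
```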

(3) Obtain prediction results by calling the predict method: predict is a generator, so results must be obtained through iteration. It processes data in batches, and the prediction results are returned as lists, as sketched below.
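As a short sketch of this generator behavior (the image names are placeholders):

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="image_classification")
# predict() returns a generator: nothing is computed until it is consumed,
# either lazily in a for loop or all at once via list().
output = pipeline.predict(["img1.jpg", "img2.jpg"])  # hypothetical local images
results = list(output)  # materialize every prediction at once, if desired
```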

(4) Process the prediction results: The prediction result for each sample is of dict type and supports printing or saving to file, with the supported file types depending on the specific pipeline. For example (a usage sketch follows the table):

| Method | Description | Method Parameters |
|---|---|---|
| print | Prints results to the terminal | format_json: bool, whether to format the output with JSON indentation, default True;<br>indent: int, JSON formatting setting, only valid when format_json is True, default 4;<br>ensure_ascii: bool, JSON formatting setting, only valid when format_json is True, default False |
| save_to_json | Saves results as a JSON file | save_path: str, the path to save the file; when it is a directory, the saved file is named after the input file;<br>indent: int, JSON formatting setting, default 4;<br>ensure_ascii: bool, JSON formatting setting, default False |
| save_to_img | Saves results as an image file | save_path: str, the path to save the file; when it is a directory, the saved file is named after the input file |
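A short sketch of these methods with their documented parameters spelled out:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="image_classification")
for res in pipeline.predict("general_image_classification_001.jpg"):
    # The keyword arguments mirror the table above.
    res.print(format_json=True, indent=4, ensure_ascii=False)
    res.save_to_json(save_path="./output/", indent=4, ensure_ascii=False)
    res.save_to_img(save_path="./output/")
```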

If you have a configuration file, you can customize the configurations of the image classification pipeline by simply setting the pipeline parameter in the create_pipeline method to the path of the pipeline configuration file.

For example, if your configuration file is saved at ./my_path/image_classification.yaml, you only need to execute:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/image_classification.yaml")
output = pipeline.predict("general_image_classification_001.jpg")
for res in output:
    res.print()  # Print the structured output of prediction
    res.save_to_img("./output/")  # Save the visualization image of the result
    res.save_to_json("./output/")  # Save the structured output of prediction
```

3. Development Integration/Deployment

If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.

If you need to apply the pipeline directly in your Python project, refer to the example code in 2.2.2 Python Script Integration.

Additionally, PaddleX provides three other deployment methods, detailed as follows:

🚀 High-Performance Inference: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. To this end, PaddleX provides high-performance inference plugins aimed at deeply optimizing model inference and pre/post-processing for significant end-to-end speedups. For detailed high-performance inference procedures, refer to the PaddleX High-Performance Inference Guide.

☁️ Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports users in achieving low-cost service-oriented deployment of pipelines. For detailed service-oriented deployment procedures, refer to the PaddleX Service-Oriented Deployment Guide.

Below are the API references and multi-language service invocation examples:

API Reference

For main operations provided by the service:

  • The HTTP request method is POST.
  • The request body and the response body are both JSON data (JSON objects).
  • When the request is processed successfully, the response status code is 200, and the response body properties are as follows:
| Name | Type | Description |
|---|---|---|
| errorCode | integer | Error code. Fixed at 0. |
| errorMsg | string | Error message. Fixed at "Success". |

The response body may also have a result property of type object, which stores the operation result information.

  • When the request is not processed successfully, the response body properties are as follows (a client-side handling sketch follows the table):
| Name | Type | Description |
|---|---|---|
| errorCode | integer | Error code. Same as the response status code. |
| errorMsg | string | Error message. |
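For example, a hedged sketch of handling both outcomes on the client side, based on the response contract above (the endpoint and payload match the invocation examples below):

```python
import requests

# Hypothetical client-side handling; the image payload is elided here.
response = requests.post("http://localhost:8080/image-classification",
                         json={"image": "..."})
body = response.json()
if response.status_code == 200:
    result = body["result"]  # errorCode is fixed at 0 on success
else:
    # On failure, errorCode matches the HTTP status code.
    print(f"Request failed ({body['errorCode']}): {body['errorMsg']}")
```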

Main operations provided by the service are as follows:

  • infer

Classify images.

POST /image-classification

  • The request body properties are as follows:
| Name | Type | Description | Required |
|---|---|---|---|
| image | string | The URL of an image file accessible by the service, or the Base64-encoded content of the image file. | Yes |
| inferenceParams | object | Inference parameters. | No |

The properties of inferenceParams are as follows; an example request payload follows the table:

| Name | Type | Description | Required |
|---|---|---|---|
| topK | integer | Only the topK categories with the highest scores are retained in the results. | No |
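For instance, a request payload that keeps only the five highest-scoring categories might be built like this (a sketch; the image file is a placeholder):

```python
import base64
import requests

with open("./demo.jpg", "rb") as f:  # hypothetical local image
    image_data = base64.b64encode(f.read()).decode("ascii")

# inferenceParams.topK limits the returned categories, per the table above.
payload = {
    "image": image_data,
    "inferenceParams": {"topK": 5},
}
response = requests.post("http://localhost:8080/image-classification", json=payload)
```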
  • When the request is processed successfully, the result of the response body has the following properties:
| Name | Type | Description |
|---|---|---|
| categories | array | Image category information. |
| image | string | The image classification result image, in JPEG format, encoded in Base64. |

Each element in categories is an object with the following properties:

| Name | Type | Description |
|---|---|---|
| id | integer | Category ID. |
| name | string | Category name. |
| score | number | Category score. |

An example of result is as follows:

```json
{
  "categories": [
    {
      "id": 5,
      "name": "Rabbit",
      "score": 0.93
    }
  ],
  "image": "xxxxxx"
}
```
Multi-Language Service Invocation Examples

Python

```python
import base64
import requests

API_URL = "http://localhost:8080/image-classification"
image_path = "./demo.jpg"
output_image_path = "./out.jpg"

with open(image_path, "rb") as file:
    image_bytes = file.read()
    image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {"image": image_data}

response = requests.post(API_URL, json=payload)

assert response.status_code == 200
result = response.json()["result"]
with open(output_image_path, "wb") as file:
    file.write(base64.b64decode(result["image"]))
print(f"Output image saved at {output_image_path}")
print("\nCategories:")
print(result["categories"])
```

C++

```cpp
#include <iostream>
#include <fstream> // for std::ifstream / std::ofstream
#include <string>
#include <vector>
#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
#include "nlohmann/json.hpp" // https://github.com/nlohmann/json
#include "base64.hpp" // https://github.com/tobiaslocker/base64

int main() {
    httplib::Client client("localhost:8080");
    const std::string imagePath = "./demo.jpg";
    const std::string outputImagePath = "./out.jpg";

    httplib::Headers headers = {
        {"Content-Type", "application/json"}
    };

    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(size);
    if (!file.read(buffer.data(), size)) {
        std::cerr << "Error reading file." << std::endl;
        return 1;
    }
    std::string bufferStr(reinterpret_cast<const char*>(buffer.data()), buffer.size());
    std::string encodedImage = base64::to_base64(bufferStr);

    nlohmann::json jsonObj;
    jsonObj["image"] = encodedImage;
    std::string body = jsonObj.dump();

    auto response = client.Post("/image-classification", headers, body, "application/json");
    if (response && response->status == 200) {
        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
        auto result = jsonResponse["result"];

        encodedImage = result["image"];
        std::string decodedString = base64::from_base64(encodedImage);
        std::vector<unsigned char> decodedImage(decodedString.begin(), decodedString.end());
        std::ofstream outputImage(outputImagePath, std::ios::binary | std::ios::out);
        if (outputImage.is_open()) {
            outputImage.write(reinterpret_cast<char*>(decodedImage.data()), decodedImage.size());
            outputImage.close();
            std::cout << "Output image saved at " << outputImagePath << std::endl;
        } else {
            std::cerr << "Unable to open file for writing: " << outputImagePath << std::endl;
        }
        }

        auto categories = result["categories"];
        std::cout << "\nCategories:" << std::endl;
        for (const auto& category : categories) {
            std::cout << category << std::endl;
        }
    } else {
        std::cout << "Failed to send HTTP request." << std::endl;
        return 1;
    }

    return 0;
}
```

Java

```java
import okhttp3.*;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Base64;

public class Main {
    public static void main(String[] args) throws IOException {
        String API_URL = "http://localhost:8080/image-classification";
        String imagePath = "./demo.jpg";
        String outputImagePath = "./out.jpg";

        File file = new File(imagePath);
        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
        String imageData = Base64.getEncoder().encodeToString(fileContent);

        ObjectMapper objectMapper = new ObjectMapper();
        ObjectNode params = objectMapper.createObjectNode();
        params.put("image", imageData);

        OkHttpClient client = new OkHttpClient();
        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
        Request request = new Request.Builder()
                .url(API_URL)
                .post(body)
                .build();

        try (Response response = client.newCall(request).execute()) {
            if (response.isSuccessful()) {
                String responseBody = response.body().string();
                JsonNode resultNode = objectMapper.readTree(responseBody);
                JsonNode result = resultNode.get("result");
                String base64Image = result.get("image").asText();
                JsonNode categories = result.get("categories");

                byte[] imageBytes = Base64.getDecoder().decode(base64Image);
                try (FileOutputStream fos = new FileOutputStream(outputImagePath)) {
                    fos.write(imageBytes);
                }
                System.out.println("Output image saved at " + outputImagePath);
                System.out.println("\nCategories: " + categories.toString());
            } else {
                System.err.println("Request failed with code: " + response.code());
            }
        }
    }
}
```

Go

```go
package main

import (
    "bytes"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    API_URL := "http://localhost:8080/image-classification"
    imagePath := "./demo.jpg"
    outputImagePath := "./out.jpg"

    imageBytes, err := ioutil.ReadFile(imagePath)
    if err != nil {
        fmt.Println("Error reading image file:", err)
        return
    }
    imageData := base64.StdEncoding.EncodeToString(imageBytes)

    payload := map[string]string{"image": imageData}
    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        fmt.Println("Error marshaling payload:", err)
        return
    }

    client := &http.Client{}
    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }
    req.Header.Set("Content-Type", "application/json") // send the payload as JSON

    res, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }
    defer res.Body.Close()

    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        fmt.Println("Error reading response body:", err)
        return
    }
    type Response struct {
        Result struct {
            Image      string   `json:"image"`
            Categories []map[string]interface{} `json:"categories"`
        } `json:"result"`
    }
    var respData Response
    err = json.Unmarshal([]byte(string(body)), &respData)
    if err != nil {
        fmt.Println("Error unmarshaling response body:", err)
        return
    }

    outputImageData, err := base64.StdEncoding.DecodeString(respData.Result.Image)
    if err != nil {
        fmt.Println("Error decoding base64 image data:", err)
        return
    }
    err = ioutil.WriteFile(outputImagePath, outputImageData, 0644)
    if err != nil {
        fmt.Println("Error writing image to file:", err)
        return
    }
    fmt.Printf("Image saved at %s.jpg\n", outputImagePath)
    fmt.Println("\nCategories:")
    for _, category := range respData.Result.Categories {
        fmt.Println(category)
    }
}
```

C#

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class Program
{
    static readonly string API_URL = "http://localhost:8080/image-classification";
    static readonly string imagePath = "./demo.jpg";
    static readonly string outputImagePath = "./out.jpg";

    static async Task Main(string[] args)
    {
        var httpClient = new HttpClient();

        byte[] imageBytes = File.ReadAllBytes(imagePath);
        string image_data = Convert.ToBase64String(imageBytes);

        var payload = new JObject{ { "image", image_data } };
        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");

        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
        response.EnsureSuccessStatusCode();

        string responseBody = await response.Content.ReadAsStringAsync();
        JObject jsonResponse = JObject.Parse(responseBody);

        string base64Image = jsonResponse["result"]["image"].ToString();
        byte[] outputImageBytes = Convert.FromBase64String(base64Image);

        File.WriteAllBytes(outputImagePath, outputImageBytes);
        Console.WriteLine($"Output image saved at {outputImagePath}");
        Console.WriteLine("\nCategories:");
        Console.WriteLine(jsonResponse["result"]["categories"].ToString());
    }
}
```

Node.js

```javascript
const axios = require('axios');
const fs = require('fs');

const API_URL = 'http://localhost:8080/image-classification'
const imagePath = './demo.jpg'
const outputImagePath = "./out.jpg";

let config = {
   method: 'POST',
   maxBodyLength: Infinity,
   url: API_URL,
   data: JSON.stringify({
    'image': encodeImageToBase64(imagePath)
  })
};

function encodeImageToBase64(filePath) {
  const bitmap = fs.readFileSync(filePath);
  return Buffer.from(bitmap).toString('base64');
}

axios.request(config)
.then((response) => {
    const result = response.data["result"];
    const imageBuffer = Buffer.from(result["image"], 'base64');
    fs.writeFile(outputImagePath, imageBuffer, (err) => {
      if (err) throw err;
      console.log(`Output image saved at ${outputImagePath}`);
    });
    console.log("\nCategories:");
    console.log(result["categories"]);
})
.catch((error) => {
  console.log(error);
});
```

PHP

```php
<?php

$API_URL = "http://localhost:8080/image-classification";
$image_path = "./demo.jpg";
$output_image_path = "./out.jpg";

$image_data = base64_encode(file_get_contents($image_path));
$payload = array("image" => $image_data);

$ch = curl_init($API_URL);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

$result = json_decode($response, true)["result"];
file_put_contents($output_image_path, base64_decode($result["image"]));
echo "Output image saved at " . $output_image_path . "\n";
echo "\nCategories:\n";
print_r($result["categories"]);
?>
```


📱 Edge Deployment: Edge deployment is a method that places computing and data processing functions on user devices themselves, allowing devices to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, refer to the PaddleX Edge Deployment Guide. You can choose the appropriate deployment method for your model pipeline based on your needs and proceed with subsequent AI application integration.

4. Custom Development

If the default model weights provided by the general image classification pipeline do not meet your requirements for accuracy or speed in your specific scenario, you can try to further fine-tune the existing model using data from your specific domain or application scenario to improve the recognition performance of the general image classification pipeline in your scenario.

4.1 Model Fine-tuning

Since the general image classification pipeline includes an image classification module, if the performance of the pipeline does not meet expectations, you need to refer to the Customization section in the Image Classification Module Development Tutorial and use your private dataset to fine-tune the image classification model.

4.2 Model Application

After you have completed fine-tuning training using your private dataset, you will obtain local model weight files.

If you need to use the fine-tuned model weights, simply modify the pipeline configuration file, replacing the model path at the corresponding location with the local path of the fine-tuned model weights:

```yaml
......
Pipeline:
  model: PP-LCNet_x1_0  # Can be modified to the local path of the fine-tuned model
  device: "gpu"
  batch_size: 1
......
```
Then, refer to the command line method or Python script method in the local experience section to load the modified pipeline configuration file.

5. Multi-hardware Support

PaddleX supports various mainstream hardware devices such as NVIDIA GPUs, Kunlun XPU, Ascend NPU, and Cambricon MLU. Simply modify the --device parameter to seamlessly switch between different hardware.

For example, if you use an NVIDIA GPU for inference in the image classification pipeline, the command is:

```bash
paddlex --pipeline image_classification --input general_image_classification_001.jpg --device gpu:0
```

At this point, if you wish to switch the hardware to Ascend NPU, simply change `--device` to `npu:0`:

```bash
paddlex --pipeline image_classification --input general_image_classification_001.jpg --device npu:0
```
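The same switch presumably applies when creating the pipeline from Python, using the `device` parameter documented in Section 2.2.2 (a sketch):

```python
from paddlex import create_pipeline

# Change the device string to switch hardware, mirroring the CLI flag.
pipeline = create_pipeline(pipeline="image_classification", device="npu:0")
for res in pipeline.predict("general_image_classification_001.jpg"):
    res.print()
```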
If you want to use the General Image Classification Pipeline on more types of hardware, please refer to the PaddleX Multi-Device Usage Guide.
