Open Vocabulary Segmentation Pipeline User Guide¶
1. Introduction to Open Vocabulary Segmentation Pipeline¶
Open vocabulary segmentation is an image segmentation task that segments objects in an image based on information beyond the image itself, such as text descriptions, bounding boxes, and key points. It allows the model to handle a wide range of object categories without a predefined category list. This technology combines vision and multimodal techniques, greatly enhancing the flexibility and accuracy of image processing. Open vocabulary segmentation has significant application value in computer vision, especially for object segmentation tasks in complex scenes. This pipeline also provides flexible service deployment options, supporting multiple programming languages on various hardware. It does not currently support secondary development of the model, but such support is planned for the future.
The general open vocabulary segmentation pipeline includes an open vocabulary segmentation module. You can choose the model based on the benchmark data below.
If you prioritize model accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference speed; if you prioritize storage size, choose a model with a smaller storage size.
General Image Open Vocabulary Segmentation Module (Optional):
| Model | Model Download Link | GPU Inference Time (ms)<br/>[Normal Mode / High-Performance Mode] | CPU Inference Time (ms) | Model Storage Size (M) | Description |
|---|---|---|---|---|---|
| SAM-H_box | Inference Model | 144.9 | 33920.7 | 2433.7 | SAM (Segment Anything Model) is an advanced image segmentation model that can segment any object in an image based on simple prompts provided by the user (such as points, boxes, or text). Trained on the SA-1B dataset, which contains over ten million images and one billion mask annotations, it performs well in most scenarios. SAM-H_box uses a box as the segmentation prompt, and SAM segments the main subject enclosed by the box. |
| SAM-H_point | Inference Model | 144.9 | 33920.7 | 2433.7 | SAM-H_point uses a point as the segmentation prompt, and SAM segments the subject at that point. |
Test Environment Description:

- Performance Test Environment
    - Hardware Configuration:
        - GPU: NVIDIA Tesla T4
        - CPU: Intel Xeon Gold 6271C @ 2.60GHz
    - Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2

Inference Mode Description:
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 precision / no TRT acceleration | FP32 precision / 8 threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 precision / 8 threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
2. Quick Start¶
2.1 Local Experience¶
❗ Before using the general open vocabulary segmentation pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the PaddleX Local Installation Guide.
2.1.1 Command Line Experience¶
- You can quickly experience the open vocabulary segmentation pipeline with a single command. Use the test file and replace `--input` with the path to your local file to run the prediction.
```bash
paddlex --pipeline open_vocabulary_segmentation \
    --input open_vocabulary_segmentation.jpg \
    --prompt_type box \
    --prompt "[[112.9,118.4,513.8,382.1],[4.6,263.6,92.2,336.6],[592.4,260.9,607.2,294.2]]" \
    --save_path ./output \
    --device gpu:0
```
The relevant parameter descriptions can be found in 2.1.2 Python Script Integration.
After running, the result will be printed to the terminal, as follows:
```
{'res': {'input_path': 'open_vocabulary_segmentation.jpg', 'prompts': {'box_prompt': [[112.9, 118.4, 513.8, 382.1], [4.6, 263.6, 92.2, 336.6], [592.4, 260.9, 607.2, 294.2]]}, 'masks': '...', 'mask_infos': [{'label': 'box_prompt', 'prompt': [112.9, 118.4, 513.8, 382.1]}, {'label': 'box_prompt', 'prompt': [4.6, 263.6, 92.2, 336.6]}, {'label': 'box_prompt', 'prompt': [592.4, 260.9, 607.2, 294.2]}]}}
```
For an explanation of the result parameters, refer to the result explanation in 2.1.2 Python Script Integration.
The visualization results are saved under `save_path`.
2.1.2 Python Script Integration¶
- The above command line is for a quick experience of the effect. Generally, in a project, integration through code is required. You can complete rapid pipeline inference with just a few lines of code, as follows:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline_name="open_vocabulary_segmentation")

output = pipeline.predict(
    input="open_vocabulary_segmentation.jpg",
    prompt_type="box",
    prompt=[[112.9, 118.4, 513.8, 382.1], [4.6, 263.6, 92.2, 336.6], [592.4, 260.9, 607.2, 294.2]],
)
for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/")
```
In the above Python script, the following steps are executed:
(1) The `create_pipeline()` function is used to instantiate an Open Vocabulary Segmentation pipeline object. The specific parameter descriptions are as follows:
| Parameter | Parameter Description | Parameter Type | Default Value |
|---|---|---|---|
| `pipeline_name` | The name of the pipeline, which must be supported by PaddleX. | `str` | `None` |
| `config` | The path to the pipeline configuration file. | `str` | `None` |
| `device` | The inference device for the pipeline. Supports specifying an exact GPU card number, such as `"gpu:0"`, a card number for other hardware, such as `"npu:0"`, or the CPU, such as `"cpu"`. | `str` | `None` |
| `use_hpip` | Whether to enable high-performance inference. Only available if the pipeline supports high-performance inference. | `bool` | `False` |
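For example, a minimal sketch of pipeline instantiation with explicit options; the parameter names come from the table above, and setting `use_hpip=True` assumes the high-performance inference plugin is installed:

```python
from paddlex import create_pipeline

# Instantiate the pipeline with an explicit device; use_hpip stays False
# unless your environment supports high-performance inference.
pipeline = create_pipeline(
    pipeline_name="open_vocabulary_segmentation",
    device="gpu:0",   # or "cpu", "npu:0", etc.
    use_hpip=False,
)
```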
(2) The `predict()` method of the Open Vocabulary Segmentation pipeline object is called to perform inference prediction. This method returns a `generator`. Below are the parameters of the `predict()` method and their descriptions:
| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `input` | The data to be predicted, supporting multiple input types (required). | `Python Var`, `str`, or `list` | A Python variable such as `numpy.ndarray` image data, a `str` such as a local file path or URL, or a `list` of the above. | `None` |
| `device` | The inference device for the pipeline. | `str` or `None` | A device string such as `"gpu:0"`, `"npu:0"`, or `"cpu"`; `None` falls back to the value used at pipeline initialization. | `None` |
| `prompt_type` | The type of prompt used during model inference. | `str` | `"box"` or `"point"`. | `None` |
| `prompt` | The specific prompt used during model inference. | `list[list[float]]` | A list of prompts; for box prompts, each element is `[x1, y1, x2, y2]` (see the example above). | `None` |
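Point prompts follow the same call pattern. The sketch below is illustrative only: the `[x, y]` point coordinate format is an assumption based on common SAM usage, not confirmed by this guide:

```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline_name="open_vocabulary_segmentation")

# Hypothetical point prompt: [x, y] pixel coordinates are assumed here.
output = pipeline.predict(
    input="open_vocabulary_segmentation.jpg",
    prompt_type="point",
    prompt=[[300.0, 250.0]],
)
for res in output:
    res.save_to_img(save_path="./output/")
```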
(3) Process the prediction results. The prediction result for each sample is of the `dict` type and supports operations such as printing, saving as an image, and saving as a `json` file:
| Method | Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the result to the terminal | `format_json` | `bool` | Whether to format the output content using `JSON` indentation | `True` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable. Effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Control whether to escape non-`ASCII` characters to `Unicode`. When set to `True`, all non-`ASCII` characters will be escaped; `False` retains the original characters. Effective only when `format_json` is `True` | `False` |
| `save_to_json()` | Save the result as a JSON file | `save_path` | `str` | Path to save the file. When it is a directory, the saved file is named consistently with the input file | `None` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable. Effective only when `format_json` is `True` | `4` |
| | | `ensure_ascii` | `bool` | Control whether to escape non-`ASCII` characters to `Unicode`. When set to `True`, all non-`ASCII` characters will be escaped; `False` retains the original characters. Effective only when `format_json` is `True` | `False` |
| `save_to_img()` | Save the result as an image file | `save_path` | `str` | Path to save the file; supports a directory or file path | `None` |
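For instance, a minimal sketch that combines these methods on a single result object, using the parameters from the table above:

```python
# `res` is one result yielded by pipeline.predict(...).
res.print(format_json=True, indent=4, ensure_ascii=False)  # pretty-print to the terminal
res.save_to_json(save_path="./output/")  # saved as ./output/{input_basename}_res.json
res.save_to_img(save_path="./output/")   # saved with the input image's extension
```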
- Calling the `print()` method will print the result to the terminal, with the printed content explained as follows:
    - `input_path`: `(str)` The input path of the image to be predicted
    - `page_index`: `(Union[int, None])` If the input is a PDF file, this indicates the current page of the PDF; otherwise, it is `None`
    - `prompts`: `(dict)` The original prompt information used for predicting the image
    - `masks`: `...` The actual masks predicted by the segmentation model. Due to the large data size, they are replaced with `...` for printing. You can save the prediction results as an image using `res.save_to_img` or as a JSON file using `res.save_to_json`.
    - `mask_infos`: `(list)` Segmentation result information corresponding to the elements in `masks`, with the same length as `masks`. Each element is a dictionary containing the following fields:
        - `label`: `(str)` The type of prompt used to predict the corresponding element in `masks`; for example, `box_prompt` indicates that the corresponding mask was obtained using a bounding box as the prompt
        - `prompt`: `(list)` The specific prompt information used for predicting the corresponding element in `masks`
- Calling the `save_to_json()` method will save the above content to the specified `save_path`. If a directory is specified, the saved path will be `save_path/{your_img_basename}_res.json`; if a file is specified, it will be saved directly to that file. Since JSON files do not support saving numpy arrays, `numpy.array` types will be converted to lists.
- Calling the `save_to_img()` method will save the visualization results to the specified `save_path`. If a directory is specified, the saved path will be `save_path/{your_img_basename}_res.{your_img_extension}`; if a file is specified, it will be saved directly to that file.
- Additionally, it also supports obtaining the visualized image and prediction results through attributes, as follows:

| Attribute | Attribute Description |
|---|---|
| `json` | Get the predicted result in `json` format |
| `img` | Get the visualized image in `dict` format |
- The prediction result obtained through the `json` attribute is data of the `dict` type, with content consistent with what is saved by calling the `save_to_json()` method.
- The prediction result returned by the `img` attribute is data of the dictionary type. The key is `res`, and the corresponding value is an `Image.Image` object used for visualizing the open vocabulary segmentation results.
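A brief sketch of reading these attributes, grounded in the descriptions above:

```python
# `res` is one result yielded by pipeline.predict(...).
result_dict = res.json   # dict, same content as the save_to_json() output
vis_images = res.img     # dict mapping "res" to a PIL Image.Image
vis_images["res"].save("./output/visualization.jpg")  # save the visualization manually
```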
In addition, you can obtain the open vocabulary segmentation pipeline configuration file and load it for prediction. You can execute the following command to save the configuration file in `my_path`:
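A sketch of that command, assuming the standard PaddleX `--get_pipeline_config` CLI option:

```bash
paddlex --get_pipeline_config open_vocabulary_segmentation --save_path ./my_path
```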
If you have obtained the configuration file, you can customize the settings for the open vocabulary segmentation pipeline. Simply modify the value of the `pipeline` parameter in the `create_pipeline` method to the path of the pipeline configuration file. An example is as follows:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/open_vocabulary_segmentation.yaml")

output = pipeline.predict(
    input="./open_vocabulary_segmentation.jpg",
    prompt_type="box",
    prompt=[[112.9, 118.4, 513.8, 382.1], [4.6, 263.6, 92.2, 336.6], [592.4, 260.9, 607.2, 294.2]]
)
for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_json("./output/")
```
Note: The parameters in the configuration file are for pipeline initialization. If you wish to change the initialization parameters of the general open vocabulary segmentation pipeline, you can directly modify the parameters in the configuration file and load the configuration file for prediction. Additionally, CLI prediction also supports passing in the configuration file by specifying the path with `--pipeline`.
3. Development Integration/Deployment¶
If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
If you need to apply the pipeline directly to your Python project, you can refer to the example code in 2.1.2 Python Script Integration.
Additionally, PaddleX provides three other deployment methods, detailed as follows:
🚀 High-Performance Inference: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. For this purpose, PaddleX provides a high-performance inference plugin, aimed at deeply optimizing the performance of model inference and pre/post-processing, significantly accelerating the end-to-end process. For detailed high-performance inference procedures, please refer to PaddleX High-Performance Inference Guide.
☁️ Service Deployment: Service deployment is a common form of deployment in actual production environments. By encapsulating the inference function as a service, clients can access these services via network requests to obtain inference results. PaddleX supports multiple pipeline service deployment solutions. For detailed pipeline service deployment procedures, please refer to PaddleX Service Deployment Guide.
Below are the API references and multi-language service invocation examples for basic service deployment:
API Reference
For the main operations provided by the service:
- The HTTP request method is POST.
- Both the request body and response body are JSON data (JSON objects).
- When the request is processed successfully, the response status code is `200`, and the response body has the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed at `0`. |
| `errorMsg` | `string` | Error description. Fixed at `"Success"`. |
| `result` | `object` | The result of the operation. |
- When the request is not processed successfully, the response body has the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error description. |
The main operations provided by the service are as follows:
`infer`

Perform object segmentation on an image.

`POST /open-vocabulary-segmentation`
- The attributes of the request body are as follows:
| Name | Type | Meaning | Required |
|---|---|---|---|
| `image` | `string` | The URL of an image file accessible to the server, or the Base64-encoded content of the image file. | Yes |
| `prompt` | `array` | The prompt used for prediction. | Yes |
| `promptType` | `string` | The type of prompt used for prediction. | Yes |
- When the request is processed successfully, the `result` in the response body has the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `masks` | `array` | The segmentation prediction results. |
| `maskInfos` | `array` | Corresponds one-to-one with the elements in `masks`, recording the prompt used for each segmentation result in `masks`. |
| `image` | `string` | The segmentation result image. The image is in JPEG format and encoded in Base64. |
The elements in the `masks` field are `rle`-encoded. To obtain the original segmentation results, decode them using `pycocotools.mask.decode`.
Each element in `maskInfos` is an `object` with the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `label` | `string` | The category of the prompt used to generate the mask. |
| `prompt` | `array` | The prompt array. |
An example of `result` is as follows:

```
{
  "masks": [rle_mask1, rle_mask2, rle_mask3],
  "maskInfos": [
    {"label": "box_prompt", "prompt": [112.9, 118.4, 513.8, 382.1]},
    {"label": "box_prompt", "prompt": [4.6, 263.6, 92.2, 336.6]},
    {"label": "box_prompt", "prompt": [592.4, 260.9, 607.2, 294.2]}
  ]
}
```
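A decoding sketch for the returned masks, assuming each element of `masks` is a COCO-style RLE dict with `size` and `counts` fields (the exact wire format is an assumption):

```python
from pycocotools import mask as mask_utils

# `result` is the parsed "result" object from a successful response.
# Assumption: each element of result["masks"] is a COCO-style RLE dict,
# e.g. {"size": [h, w], "counts": "..."}; pycocotools expects `counts` as bytes.
for rle, info in zip(result["masks"], result["maskInfos"]):
    if isinstance(rle.get("counts"), str):
        rle["counts"] = rle["counts"].encode("utf-8")
    binary_mask = mask_utils.decode(rle)  # numpy array of shape (h, w), dtype uint8
    print(info["label"], info["prompt"], int(binary_mask.sum()), "foreground pixels")
```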
Multi-language Service Call Example
Python
```python
import base64
import requests

API_URL = "http://localhost:8080/open-vocabulary-segmentation"  # Service URL
image_path = "./open_vocabulary_segmentation.jpg"
output_image_path = "./out.jpg"

# Encode the local image with Base64
with open(image_path, "rb") as file:
    image_bytes = file.read()
    image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {
    "image": image_data,  # Base64-encoded file content or image URL
    "promptType": "box",
    "prompt": [[112.9, 118.4, 513.8, 382.1], [4.6, 263.6, 92.2, 336.6], [592.4, 260.9, 607.2, 294.2]],
}

# Call the API
response = requests.post(API_URL, json=payload)

# Process the data returned by the API
assert response.status_code == 200
result = response.json()["result"]

# Decode and save the visualized result image
with open(output_image_path, "wb") as file:
    file.write(base64.b64decode(result["image"]))
print(f"Output image saved at {output_image_path}")

print("\nresult (with RLE-encoded binary masks):")
print(result)
```
📱 Edge Deployment: Edge deployment is a method of placing computing and data processing capabilities on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the PaddleX Edge Deployment Guide. You can choose the appropriate method to deploy the model pipeline according to your needs, and then proceed with subsequent AI application integration.
4. Secondary Development¶
The current pipeline does not yet support fine-tuning training; only inference integration is available. Support for fine-tuning training of this pipeline is planned for the future.
5. Multi-Hardware Support¶
The current pipeline only supports GPU and CPU inference for now. Adaptation of this pipeline to additional hardware is planned for the future.