Open Vocabulary Detection Pipeline User Guide¶
1. Introduction to Open Vocabulary Detection Pipeline¶
Open vocabulary object detection is an advanced object detection technique that aims to overcome the limitations of traditional object detection. Traditional methods can only recognize objects of predefined categories, whereas open vocabulary object detection allows a model to recognize objects that did not appear during training. By combining natural language processing techniques, new categories can be defined with text descriptions, enabling the model to recognize and locate these new objects. This makes object detection more flexible and generalizable, with significant application prospects. This pipeline also provides flexible service deployment options, supporting invocation from multiple programming languages on various hardware. Currently, this pipeline does not support secondary development of the model, but such support is planned for the future.
The general open vocabulary detection pipeline includes an open vocabulary detection module. You can choose the model based on the benchmark data below.
If you prioritize model accuracy, choose a model with higher accuracy; if you prioritize inference speed, choose a model with faster inference speed; if you prioritize storage size, choose a model with a smaller storage size.
General Image Open Vocabulary Detection Module (Optional):
| Model | Model Download Link | mAP(0.5:0.95) | mAP(0.5) | GPU Inference Time (ms) [Normal Mode / High-Performance Mode] | CPU Inference Time (ms) | Model Storage Size (M) | Description |
|---|---|---|---|---|---|---|---|
| GroundingDINO-T | Inference Model | 49.4 | 64.4 | 253.72 | 1807.4 | 658.3 | An open vocabulary object detection model trained on the O365, GoldG, and Cap4M datasets. The text encoder uses BERT, and the visual part adopts the overall DINO architecture with additional cross-modal fusion modules, achieving good results in the field of open vocabulary object detection. |
Test Environment Description:

- Performance Test Environment:
  - Test Dataset: COCO val2017 validation set
  - Hardware Configuration:
    - GPU: NVIDIA Tesla T4
    - CPU: Intel Xeon Gold 6271C @ 2.60GHz
    - Other Environments: Ubuntu 20.04 / cuDNN 8.6 / TensorRT 8.5.2.2
- Inference Mode Description:
| Mode | GPU Configuration | CPU Configuration | Acceleration Technology Combination |
|---|---|---|---|
| Normal Mode | FP32 Precision / No TRT Acceleration | FP32 Precision / 8 Threads | PaddleInference |
| High-Performance Mode | Optimal combination of pre-selected precision types and acceleration strategies | FP32 Precision / 8 Threads | Pre-selected optimal backend (Paddle/OpenVINO/TRT, etc.) |
2. Quick Start¶
2.1 Local Experience¶
❗ Before using the general open vocabulary detection pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the PaddleX Local Installation Guide.
2.1.1 Command Line Experience¶
- You can quickly experience the open vocabulary detection pipeline with a single command. Use the test file and replace `--input` with your local path for prediction.
```bash
paddlex --pipeline open_vocabulary_detection \
    --input open_vocabulary_detection.jpg \
    --prompt "bus . walking man . rearview mirror ." \
    --thresholds "{'text_threshold': 0.25, 'box_threshold': 0.3}" \
    --save_path ./output \
    --device gpu:0
```
For descriptions of the relevant parameters, refer to the parameter descriptions in 2.1.2 Python Script Integration.
After running, the result will be printed to the terminal, as follows:
```
{'res': {'input_path': 'open_vocabulary_detection.jpg', 'page_index': None, 'boxes': [{'coordinate': [112.10542297363281, 117.93667602539062, 514.35693359375, 382.10150146484375], 'label': 'bus', 'score': 0.9348853230476379}, {'coordinate': [264.1828918457031, 162.6674346923828, 286.8844909667969, 201.86187744140625], 'label': 'rearview mirror', 'score': 0.6022508144378662}, {'coordinate': [606.1133422851562, 254.4973907470703, 622.56982421875, 293.7867126464844], 'label': 'walking man', 'score': 0.4384709894657135}, {'coordinate': [591.8192138671875, 260.2451171875, 607.3953247070312, 294.2210388183594], 'label': 'man', 'score': 0.3573091924190521}]}}
```
For an explanation of the result parameters, refer to the result explanation in 2.1.2 Python Script Integration.
The visualization results are saved under `save_path`.
2.1.2 Python Script Integration¶
- The above command line is for quickly experiencing the effect. In a project, you will generally need to integrate through code. You can complete rapid pipeline inference with just a few lines of code, as follows:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline_name="open_vocabulary_detection")

output = pipeline.predict(input="open_vocabulary_detection.jpg", prompt="bus . walking man . rearview mirror .")

for res in output:
    res.print()
    res.save_to_img(save_path="./output/")
    res.save_to_json(save_path="./output/")
```
In the above Python script, the following steps are executed:
(1) The `create_pipeline()` function is used to instantiate an Open Vocabulary Detection pipeline object, with the specific parameter descriptions as follows:

| Parameter | Parameter Description | Parameter Type | Default Value |
|---|---|---|---|
| `pipeline_name` | The name of the pipeline, which must be supported by PaddleX. | `str` | `None` |
| `config` | The path to the pipeline configuration file. | `str` | `None` |
| `device` | The inference device for the pipeline. It supports specifying the exact card number for GPU, such as "gpu:0", other hardware card numbers, such as "npu:0", or CPU, such as "cpu". | `str` | `None` |
| `use_hpip` | Whether to enable high-performance inference, which is only available if the pipeline supports it. | `bool` | `False` |
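For instance, a minimal sketch combining these initialization parameters (the device string is illustrative):

```python
from paddlex import create_pipeline

# Instantiate the pipeline on GPU card 0; enable use_hpip only if the
# high-performance inference plugin is installed.
pipeline = create_pipeline(
    pipeline_name="open_vocabulary_detection",
    device="gpu:0",
    use_hpip=False,
)
```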
(2) The `predict()` method of the Open Vocabulary Detection pipeline object is called to perform inference prediction. This method returns a `generator`. Below are the parameters of the `predict()` method and their descriptions:

| Parameter | Parameter Description | Parameter Type | Options | Default Value |
|---|---|---|---|---|
| `input` | The data to be predicted, supporting multiple input types (required). | `Python Var\|str\|list` | | `None` |
| `device` | The inference device for the pipeline. | `str\|None` | | `None` |
| `thresholds` | The thresholds used during model inference. | `dict[str, float]` | | `None` |
| `prompt` | The prompt used during model inference. | `str` | | `None` |
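For example, since `input` also accepts a `list` and `thresholds` takes a `dict[str, float]`, a batch-prediction sketch might look as follows (the image file names are hypothetical):

```python
# Batch prediction over multiple local images with explicit thresholds.
output = pipeline.predict(
    input=["image_1.jpg", "image_2.jpg"],  # hypothetical local files
    prompt="bus . walking man .",
    thresholds={"text_threshold": 0.25, "box_threshold": 0.3},
)
for res in output:
    res.print()
```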
(3) Process the prediction results. The prediction result for each sample is of the `dict` type and supports operations such as printing, saving as an image, and saving as a `json` file:

| Method | Description | Parameter | Parameter Type | Parameter Description | Default Value |
|---|---|---|---|---|---|
| `print()` | Print the result to the terminal | `format_json` | `bool` | Whether to format the output content using `JSON` indentation | `True` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable. Effective only when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether to escape non-`ASCII` characters to `Unicode`. When set to `True`, all non-`ASCII` characters will be escaped; `False` will retain the original characters. Effective only when `format_json` is `True` | `False` |
| `save_to_json()` | Save the result as a JSON file | `save_path` | `str` | Path to save the file. When it is a directory, the saved file name matches the input file name | `None` |
| | | `indent` | `int` | Specify the indentation level to beautify the output `JSON` data, making it more readable. Effective only when `format_json` is `True` | 4 |
| | | `ensure_ascii` | `bool` | Control whether to escape non-`ASCII` characters to `Unicode`. When set to `True`, all non-`ASCII` characters will be escaped; `False` will retain the original characters. Effective only when `format_json` is `True` | `False` |
| `save_to_img()` | Save the result as an image file | `save_path` | `str` | Path to save the file, supports directory or file path | `None` |
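Putting the table's parameters together, a minimal sketch of result handling:

```python
for res in output:
    # Pretty-print as JSON with 4-space indentation, keeping non-ASCII characters.
    res.print(format_json=True, indent=4, ensure_ascii=False)
    # Save the JSON result and the visualization under ./output/.
    res.save_to_json(save_path="./output/")
    res.save_to_img(save_path="./output/")
```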
- Calling the `print()` method will print the result to the terminal, with the printed content explained as follows:
  - `input_path`: `(str)` The input path of the image to be predicted
  - `page_index`: `(Union[int, None])` If the input is a PDF file, this indicates which page of the PDF it is; otherwise it is `None`
  - `boxes`: `(list)` Detection box information; each element is a dictionary containing the following fields:
    - `label`: `(str)` Category name
    - `score`: `(float)` Confidence score
    - `coordinate`: `(list)` Detection box coordinates, in the format `[xmin, ymin, xmax, ymax]`
- Calling the `save_to_json()` method will save the above content to the specified `save_path`. If a directory is specified, the saved path will be `save_path/{your_img_basename}_res.json`; if a file is specified, the result is saved directly to that file. Since JSON files do not support saving numpy arrays, `numpy.array` types will be converted to lists.
- Calling the `save_to_img()` method will save the visualization results to the specified `save_path`. If a directory is specified, the saved path will be `save_path/{your_img_basename}_res.{your_img_extension}`; if a file is specified, the result is saved directly to that file.
- Additionally, it also supports obtaining the visualized image and prediction results through attributes, as follows:
| Attribute | Attribute Description |
|---|---|
| `json` | Get the predicted `json` format result |
| `img` | Get the visualized image in `dict` format |
- The prediction result obtained through the `json` attribute is of the `dict` type, with content consistent with that saved by calling the `save_to_json()` method.
- The prediction result returned by the `img` attribute is of the `dict` type. The key is `res`, and the corresponding value is an `Image.Image` object used for visualizing the open vocabulary detection results.
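For example, a minimal sketch of attribute access (the `save` call is the standard `PIL.Image.Image.save` method; the output file name is illustrative):

```python
for res in output:
    data = res.json       # dict, same content as what save_to_json() writes
    vis = res.img["res"]  # PIL Image.Image with the detection boxes drawn
    vis.save("./output/vis_result.jpg")
```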
In addition, you can obtain the open vocabulary detection pipeline configuration file and load the configuration file for prediction. You can execute the following command to save the result in `my_path`:
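The following sketch assumes the standard PaddleX CLI flag `--get_pipeline_config` for exporting a pipeline configuration:

```bash
# Assumes the standard PaddleX flag for exporting a pipeline config file.
paddlex --get_pipeline_config open_vocabulary_detection --save_path ./my_path
```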
If you have obtained the configuration file, you can customize the settings for the open vocabulary detection pipeline. Simply change the value of the `pipeline` parameter in the `create_pipeline` method to the path of the pipeline configuration file. An example is as follows:
```python
from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/open_vocabulary_detection.yaml")

output = pipeline.predict(
    input="./open_vocabulary_detection.jpg",
    thresholds={"text_threshold": 0.25, "box_threshold": 0.3},
    prompt="cat . dog . bird ."
)

for res in output:
    res.print()
    res.save_to_img("./output/")
    res.save_to_json("./output/")
```
Note: The parameters in the configuration file are for pipeline initialization. If you wish to change the initialization parameters of the general open vocabulary detection pipeline, you can directly modify the parameters in the configuration file and load it for prediction. Additionally, CLI prediction also supports passing in a configuration file by specifying its path with `--pipeline`.
3. Development Integration/Deployment¶
If the pipeline meets your requirements for inference speed and accuracy, you can proceed directly with development integration/deployment.
If you need to apply the pipeline directly to your Python project, you can refer to the example code in 2.1.2 Python Script Integration.
Additionally, PaddleX provides three other deployment methods, detailed as follows:
🚀 High-Performance Inference: In actual production environments, many applications have stringent standards for the performance metrics of deployment strategies (especially response speed) to ensure efficient system operation and smooth user experience. For this purpose, PaddleX provides a high-performance inference plugin, aimed at deeply optimizing the performance of model inference and pre/post-processing, significantly accelerating the end-to-end process. For detailed high-performance inference procedures, please refer to PaddleX High-Performance Inference Guide.
☁️ Service Deployment: Service deployment is a common form of deployment in actual production environments. By encapsulating the inference function as a service, clients can access these services via network requests to obtain inference results. PaddleX supports multiple pipeline service deployment solutions. For detailed pipeline service deployment procedures, please refer to PaddleX Service Deployment Guide.
Below are the API references and multi-language service invocation examples for basic service deployment:
API Reference
For the main operations provided by the service:
- The HTTP request method is POST.
- Both the request body and response body are JSON data (JSON objects).
- When the request is processed successfully, the response status code is `200`, and the response body has the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Fixed at `0`. |
| `errorMsg` | `string` | Error description. Fixed at `"Success"`. |
| `result` | `object` | Operation result. |
- When the request is not processed successfully, the response body has the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `logId` | `string` | The UUID of the request. |
| `errorCode` | `integer` | Error code. Same as the response status code. |
| `errorMsg` | `string` | Error description. |
The main operations provided by the service are as follows:
`infer`

Perform object detection on an image.

`POST /open-vocabulary-detection`
- The attributes of the request body are as follows:
| Name | Type | Meaning | Required |
|---|---|---|---|
| `image` | `string` | The URL of an image file accessible to the server, or the Base64-encoded content of the image file. | Yes |
| `prompt` | `string` | The text prompt used for prediction. | Yes |
| `thresholds` | `object` \| `null` | The thresholds used by the model for prediction. | No |
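For illustration, a complete request body combining the three fields might look as follows; the threshold key names mirror the Python API above and are an assumption for the service payload:

```json
{
  "image": "https://example.com/demo.jpg",
  "prompt": "bus . walking man .",
  "thresholds": {"text_threshold": 0.25, "box_threshold": 0.3}
}
```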
- When the request is processed successfully, the `result` in the response body has the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `detectedObjects` | `array` | Information about the position, category, etc., of the detected objects. |
| `image` | `string` | The result image of object detection. The image is in JPEG format and encoded in Base64. |
Each element in `detectedObjects` is an `object` with the following attributes:
| Name | Type | Meaning |
|---|---|---|
| `bbox` | `array` | The position of the object. The elements in the array are, in order, the x-coordinate of the top-left corner, the y-coordinate of the top-left corner, the x-coordinate of the bottom-right corner, and the y-coordinate of the bottom-right corner. |
| `categoryName` | `string` | The name of the object category. |
| `score` | `number` | The score of the object. |
An example of `result` is as follows:
```json
{
  "detectedObjects": [
    {
      "bbox": [
        404.4967956542969,
        90.15770721435547,
        506.2465515136719,
        285.4187316894531
      ],
      "categoryName": "bird",
      "score": 0.7418514490127563
    },
    {
      "bbox": [
        155.33145141601562,
        81.10954284667969,
        199.71136474609375,
        167.4235382080078
      ],
      "categoryName": "dog",
      "score": 0.7328268885612488
    }
  ],
  "image": "xxxxxx"
}
```
Multi-language Service Call Examples
Python
```python
import base64
import requests

API_URL = "http://localhost:8080/open-vocabulary-detection"  # Service URL
image_path = "./open_vocabulary_detection.jpg"
output_image_path = "./out.jpg"

# Base64 encode the local image
with open(image_path, "rb") as file:
    image_bytes = file.read()
    image_data = base64.b64encode(image_bytes).decode("ascii")

payload = {"image": image_data, "prompt": "walking man . bus ."}  # Base64 encoded file content or image URL

# Call the API
response = requests.post(API_URL, json=payload)

# Handle the response data
assert response.status_code == 200, f"{response.status_code}"
result = response.json()["result"]
with open(output_image_path, "wb") as file:
    file.write(base64.b64decode(result["image"]))
print(f"Output image saved at {output_image_path}")
print("\nDetected objects:")
print(result["detectedObjects"])
```
C++
```cpp
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

#include "cpp-httplib/httplib.h" // https://github.com/Huiyicc/cpp-httplib
#include "nlohmann/json.hpp"     // https://github.com/nlohmann/json
#include "base64.hpp"            // https://github.com/tobiaslocker/base64

int main() {
    httplib::Client client("localhost:8080");
    const std::string imagePath = "./demo.jpg";
    const std::string outputImagePath = "./out.jpg";

    httplib::Headers headers = {
        {"Content-Type", "application/json"}
    };

    // Base64 encode the local image
    std::ifstream file(imagePath, std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);

    std::vector<char> buffer(size);
    if (!file.read(buffer.data(), size)) {
        std::cerr << "Error reading file." << std::endl;
        return 1;
    }
    std::string bufferStr(reinterpret_cast<const char*>(buffer.data()), buffer.size());
    std::string encodedImage = base64::to_base64(bufferStr);

    nlohmann::json jsonObj;
    jsonObj["image"] = encodedImage;
    jsonObj["prompt"] = "walking man . bus ."; // The text prompt is required
    std::string body = jsonObj.dump();

    // Call the API
    auto response = client.Post("/open-vocabulary-detection", headers, body, "application/json");

    // Handle the response data
    if (response && response->status == 200) {
        nlohmann::json jsonResponse = nlohmann::json::parse(response->body);
        auto result = jsonResponse["result"];

        encodedImage = result["image"];
        std::string decodedString = base64::from_base64(encodedImage);
        std::vector<unsigned char> decodedImage(decodedString.begin(), decodedString.end());

        std::ofstream outputImage(outputImagePath, std::ios::binary | std::ios::out);
        if (outputImage.is_open()) {
            outputImage.write(reinterpret_cast<char*>(decodedImage.data()), decodedImage.size());
            outputImage.close();
            std::cout << "Output image saved at " << outputImagePath << std::endl;
        } else {
            std::cerr << "Unable to open file for writing: " << outputImagePath << std::endl;
        }

        auto detectedObjects = result["detectedObjects"];
        std::cout << "\nDetected objects:" << std::endl;
        for (const auto& obj : detectedObjects) {
            std::cout << obj << std::endl;
        }
    } else {
        std::cout << "Failed to send HTTP request." << std::endl;
        return 1;
    }

    return 0;
}
```
Java
```java
import okhttp3.*;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ObjectNode;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Base64;

public class Main {
    public static void main(String[] args) throws IOException {
        String API_URL = "http://localhost:8080/open-vocabulary-detection"; // Service URL
        String imagePath = "./demo.jpg"; // Local image path
        String outputImagePath = "./out.jpg"; // Output image path

        // Encode the local image in Base64
        File file = new File(imagePath);
        byte[] fileContent = java.nio.file.Files.readAllBytes(file.toPath());
        String imageData = Base64.getEncoder().encodeToString(fileContent);

        ObjectMapper objectMapper = new ObjectMapper();
        ObjectNode params = objectMapper.createObjectNode();
        params.put("image", imageData); // Base64-encoded file content or image URL
        params.put("prompt", "walking man . bus ."); // The text prompt is required

        // Create an OkHttpClient instance
        OkHttpClient client = new OkHttpClient();
        MediaType JSON = MediaType.Companion.get("application/json; charset=utf-8");
        RequestBody body = RequestBody.Companion.create(params.toString(), JSON);
        Request request = new Request.Builder()
                .url(API_URL)
                .post(body)
                .build();

        // Call the API and process the response data
        try (Response response = client.newCall(request).execute()) {
            if (response.isSuccessful()) {
                String responseBody = response.body().string();
                JsonNode resultNode = objectMapper.readTree(responseBody);
                JsonNode result = resultNode.get("result");

                String base64Image = result.get("image").asText();
                JsonNode detectedObjects = result.get("detectedObjects");

                byte[] imageBytes = Base64.getDecoder().decode(base64Image);
                try (FileOutputStream fos = new FileOutputStream(outputImagePath)) {
                    fos.write(imageBytes);
                }
                System.out.println("Output image saved at " + outputImagePath);
                System.out.println("\nDetected objects: " + detectedObjects.toString());
            } else {
                System.err.println("Request failed with code: " + response.code());
            }
        }
    }
}
```
Go
```go
package main

import (
    "bytes"
    "encoding/base64"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    API_URL := "http://localhost:8080/open-vocabulary-detection"
    imagePath := "./demo.jpg"
    outputImagePath := "./out.jpg"

    // Encode the local image in Base64
    imageBytes, err := ioutil.ReadFile(imagePath)
    if err != nil {
        fmt.Println("Error reading image file:", err)
        return
    }
    imageData := base64.StdEncoding.EncodeToString(imageBytes)
    payload := map[string]string{
        "image":  imageData,             // Base64-encoded file content or image URL
        "prompt": "walking man . bus .", // The text prompt is required
    }
    payloadBytes, err := json.Marshal(payload)
    if err != nil {
        fmt.Println("Error marshaling payload:", err)
        return
    }

    // Call the API
    client := &http.Client{}
    req, err := http.NewRequest("POST", API_URL, bytes.NewBuffer(payloadBytes))
    if err != nil {
        fmt.Println("Error creating request:", err)
        return
    }
    req.Header.Set("Content-Type", "application/json")
    res, err := client.Do(req)
    if err != nil {
        fmt.Println("Error sending request:", err)
        return
    }
    defer res.Body.Close()

    // Process the response data
    body, err := ioutil.ReadAll(res.Body)
    if err != nil {
        fmt.Println("Error reading response body:", err)
        return
    }
    type Response struct {
        Result struct {
            Image           string                   `json:"image"`
            DetectedObjects []map[string]interface{} `json:"detectedObjects"`
        } `json:"result"`
    }
    var respData Response
    err = json.Unmarshal(body, &respData)
    if err != nil {
        fmt.Println("Error unmarshaling response body:", err)
        return
    }

    outputImageData, err := base64.StdEncoding.DecodeString(respData.Result.Image)
    if err != nil {
        fmt.Println("Error decoding base64 image data:", err)
        return
    }
    err = ioutil.WriteFile(outputImagePath, outputImageData, 0644)
    if err != nil {
        fmt.Println("Error writing image to file:", err)
        return
    }
    fmt.Printf("Output image saved at %s\n", outputImagePath)
    fmt.Println("\nDetected objects:")
    for _, obj := range respData.Result.DetectedObjects {
        fmt.Println(obj)
    }
}
```
C#
```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class Program
{
    static readonly string API_URL = "http://localhost:8080/open-vocabulary-detection";
    static readonly string imagePath = "./demo.jpg";
    static readonly string outputImagePath = "./out.jpg";

    static async Task Main(string[] args)
    {
        var httpClient = new HttpClient();

        // Base64 encode the local image
        byte[] imageBytes = File.ReadAllBytes(imagePath);
        string image_data = Convert.ToBase64String(imageBytes);

        var payload = new JObject
        {
            { "image", image_data },            // Base64 encoded file content or image URL
            { "prompt", "walking man . bus ." } // The text prompt is required
        };
        var content = new StringContent(payload.ToString(), Encoding.UTF8, "application/json");

        // Call the API
        HttpResponseMessage response = await httpClient.PostAsync(API_URL, content);
        response.EnsureSuccessStatusCode();

        // Process the API response
        string responseBody = await response.Content.ReadAsStringAsync();
        JObject jsonResponse = JObject.Parse(responseBody);

        string base64Image = jsonResponse["result"]["image"].ToString();
        byte[] outputImageBytes = Convert.FromBase64String(base64Image);

        File.WriteAllBytes(outputImagePath, outputImageBytes);
        Console.WriteLine($"Output image saved at {outputImagePath}");
        Console.WriteLine("\nDetected objects:");
        Console.WriteLine(jsonResponse["result"]["detectedObjects"].ToString());
    }
}
```
Node.js
```javascript
const axios = require('axios');
const fs = require('fs');

const API_URL = 'http://localhost:8080/open-vocabulary-detection'; // Service URL
const imagePath = './demo.jpg';
const outputImagePath = './out.jpg';

// Base64 encode the local image
function encodeImageToBase64(filePath) {
    const bitmap = fs.readFileSync(filePath);
    return Buffer.from(bitmap).toString('base64');
}

const config = {
    method: 'POST',
    maxBodyLength: Infinity,
    url: API_URL,
    data: JSON.stringify({
        'image': encodeImageToBase64(imagePath), // Base64 encoded file content or image URL
        'prompt': 'walking man . bus .'          // The text prompt is required
    })
};

// Call the API
axios.request(config)
    .then((response) => {
        // Process the API response
        const result = response.data['result'];
        const imageBuffer = Buffer.from(result['image'], 'base64');
        fs.writeFile(outputImagePath, imageBuffer, (err) => {
            if (err) throw err;
            console.log(`Output image saved at ${outputImagePath}`);
        });
        console.log('\nDetected objects:');
        console.log(result['detectedObjects']);
    })
    .catch((error) => {
        console.log(error);
    });
```
PHP
```php
<?php

$API_URL = "http://localhost:8080/open-vocabulary-detection"; // Service URL
$image_path = "./demo.jpg";
$output_image_path = "./out.jpg";

// Base64 encode the local image
$image_data = base64_encode(file_get_contents($image_path));
$payload = array(
    "image" => $image_data,           // Base64 encoded file content or image URL
    "prompt" => "walking man . bus ." // The text prompt is required
);

// Call the API
$ch = curl_init($API_URL);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode($payload));
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// Process the API response
$result = json_decode($response, true)["result"];
file_put_contents($output_image_path, base64_decode($result["image"]));
echo "Output image saved at " . $output_image_path . "\n";
echo "\nDetected objects:\n";
print_r($result["detectedObjects"]);
?>
```
📱 Edge Deployment: Edge deployment is a method of placing computing and data processing capabilities on the user's device itself, allowing the device to process data directly without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed edge deployment procedures, please refer to the PaddleX Edge Deployment Guide. You can choose the appropriate method to deploy the model pipeline according to your needs, and then proceed with subsequent AI application integration.
4. Secondary Development¶
The current pipeline does not yet support fine-tuning training; only inference integration is supported. Fine-tuning support for this pipeline is planned for the future.
5. Multi-Hardware Support¶
The current pipeline only supports GPU and CPU inference for now. Adaptation to additional hardware for this pipeline is planned for the future.