Multilingual Speech Recognition Pipeline User Guide

1. Introduction to the Multilingual Speech Recognition Pipeline

Speech recognition is an advanced technology that automatically converts spoken language into text or commands. It plays an important role in fields such as intelligent customer service, voice assistants, and meeting transcription. Multilingual speech recognition supports automatic language detection and recognition across multiple languages.

Multilingual Speech Recognition Models (Optional):

| Model | Training Data | Model Size | Word Error Rate | Introduction |
|---|---|---|---|---|
| whisper_large | 680k hours | 5.8 GB | 2.7 (LibriSpeech) | Whisper is a multilingual automatic speech recognition model developed by OpenAI, known for its high accuracy and robustness. Its end-to-end architecture handles noisy audio environments well, making it suitable for applications such as voice assistants and real-time subtitles. |
| whisper_medium | 680k hours | 2.9 GB | - | - |
| whisper_small | 680k hours | 923 MB | - | - |
| whisper_base | 680k hours | 277 MB | - | - |
| whisper_tiny | 680k hours | 145 MB | - | - |

2. Quick Start

PaddleX supports experiencing the multilingual speech recognition pipeline locally using the command line or Python.

Before using the multilingual speech recognition pipeline locally, please ensure that you have completed the installation of the PaddleX wheel package according to the PaddleX Local Installation Guide.

2.1 Local Experience

2.1.1 Command Line Experience

You can quickly experience the effect of the multilingual speech recognition pipeline with a single command. Use the example audio file, or replace --input with a local path to run prediction on your own audio.

paddlex --pipeline multilingual_speech_recognition \
        --input zh.wav \
        --save_path ./output \
        --device gpu:0

Descriptions of the relevant parameters can be found in 2.1.2 Integration with Python Script.

After running, the result will be printed to the terminal, as follows:

{'input_path': 'zh.wav', 'result': {'text': '我认为跑步最重要的就是给我带来了身体健康', 'segments': [{'id': 0, 'seek': 0, 'start': 0.0, 'end': 2.0, 'text': '我认为跑步最重要的就是', 'tokens': [50364, 1654, 7422, 97, 13992, 32585, 31429, 8661, 24928, 1546, 5620, 50464, 50464, 49076, 4845, 99, 34912, 19847, 29485, 44201, 6346, 115, 50564], 'temperature': 0, 'avg_logprob': -0.22779104113578796, 'compression_ratio': 0.28169014084507044, 'no_speech_prob': 0.026114309206604958}, {'id': 1, 'seek': 200, 'start': 2.0, 'end': 31.0, 'text': '给我带来了身体健康', 'tokens': [50364, 49076, 4845, 99, 34912, 19847, 29485, 44201, 6346, 115, 51814], 'temperature': 0, 'avg_logprob': -0.21976988017559052, 'compression_ratio': 0.23684210526315788, 'no_speech_prob': 0.009023111313581467}], 'language': 'zh'}}

For an explanation of the result parameters, refer to the result description in 2.1.2 Integration with Python Script.

2.1.2 Integration with Python Script

The command line above is for a quick experience of the results. In a project, integration via code is usually required; you can complete rapid pipeline inference with just a few lines of code:

from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="multilingual_speech_recognition")
output = pipeline.predict(input="zh.wav")

for res in output:
    res.print()
    res.save_to_json(save_path="./output/")

In the above Python script, the following steps are executed:

(1) The multilingual_speech_recognition pipeline object is instantiated through create_pipeline(). The specific parameter descriptions are as follows:

| Parameter | Description | Type | Default Value |
|---|---|---|---|
| pipeline | The name of the pipeline or the path to a pipeline configuration file. If a name, it must be a pipeline supported by PaddleX. | str | None |
| device | The inference device for the pipeline. Supports a specific GPU card number, such as "gpu:0", a specific card number for other hardware, such as "npu:0", or the CPU, as "cpu". | str | gpu:0 |
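
For instance, to run the pipeline on the CPU instead, pass the device at creation time (a minimal sketch based on the parameters above):

from paddlex import create_pipeline

# Override the default inference device ("gpu:0") at initialization
pipeline = create_pipeline(pipeline="multilingual_speech_recognition", device="cpu")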

(2) The predict() method of the multilingual_speech_recognition pipeline object is called to perform inference and prediction. This method will return a generator. Below are the parameters and their descriptions for the predict() method:

input — Data to be predicted. Type: str. Default: None. Options:
  • File path, such as the local path of an audio file: /root/data/audio.wav
  • URL link, such as the network URL of an audio file

device — The inference device for the pipeline. Type: str|None. Default: None. Options:
  • CPU: e.g. cpu, use the CPU for inference;
  • GPU: e.g. gpu:0, use the first GPU for inference;
  • NPU: e.g. npu:0, use the first NPU for inference;
  • XPU: e.g. xpu:0, use the first XPU for inference;
  • MLU: e.g. mlu:0, use the first MLU for inference;
  • DCU: e.g. dcu:0, use the first DCU for inference;
  • None: use the default value from pipeline initialization; during initialization, local GPU device 0 is prioritized, falling back to the CPU if unavailable.
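
For example, besides a local file path, predict() also accepts a network URL (the address below is a placeholder for illustration, not a real hosted file):

# Run inference on an audio file fetched from a URL.
# "https://example.com/zh.wav" is a hypothetical address.
output = pipeline.predict(input="https://example.com/zh.wav")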

(3) Process the prediction results. The prediction result for each sample is of the dict type and supports operations such as printing and saving as a JSON file:

print() — Print the result to the terminal. Parameters:
  • format_json (bool, default True): whether to format the output with JSON indentation.
  • indent (int, default 4): indentation level used to beautify the JSON output, making it more readable; effective only when format_json is True.
  • ensure_ascii (bool, default False): whether to escape non-ASCII characters to Unicode; when True, all non-ASCII characters are escaped; when False, the original characters are kept; effective only when format_json is True.

save_to_json() — Save the result as a JSON file. Parameters:
  • save_path (str, default None): path to save the file; when it is a directory, the saved file is named after the input file.
  • indent (int, default 4): same as for print().
  • ensure_ascii (bool, default False): same as for print().
  • Calling the print() method will print the result to the terminal, with the printed content explained as follows:

    • input_path: The path where the input audio is stored
    • result: Recognition result
      • text: The text result of speech recognition
      • segments: The result text with timestamps
        • id: ID
        • seek: Audio segment pointer
        • start: Segment start time
        • end: Segment end time
        • text: Text recognized in the segment
        • tokens: Token IDs of the segment text
        • temperature: Sampling temperature used during decoding
        • avg_logprob: Average log probability
        • compression_ratio: Compression ratio
        • no_speech_prob: Non-speech probability
      • language: Recognized language
  • Calling the save_to_json() method will save the above content to the specified save_path. If specified as a directory, the saved path will be save_path/{your_audio_basename}.json; if specified as a file, it will be saved directly to that file. Since JSON files do not support saving numpy arrays, the numpy.array types will be converted to lists.
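
As a small illustration of these parameters (continuing from the earlier script; note that output is a generator and can only be iterated once):

for res in output:
    # Pretty-print with 2-space indentation, keeping Chinese characters unescaped
    res.print(format_json=True, indent=2, ensure_ascii=False)
    # With input zh.wav, this writes ./output/zh.json
    res.save_to_json(save_path="./output/", indent=2, ensure_ascii=False)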

  • Additionally, prediction results can be obtained through attributes, as follows:

| Attribute | Description |
|---|---|
| json | Get the prediction result in JSON format |

  • The prediction result obtained through the json attribute is a dict, consistent with the content saved by calling the save_to_json() method.
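
A minimal sketch of reading the result through the attribute:

for res in output:
    data = res.json  # dict with the same content as the saved JSON file
    print(data["result"]["text"])  # the recognized text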

In addition, you can obtain the multilingual_speech_recognition pipeline configuration file and load the configuration file for prediction. You can execute the following command to save the result in my_path:

paddlex --get_pipeline_config multilingual_speech_recognition --save_path ./my_path

If you have obtained the configuration file, you can customize the settings for the multilingual_speech_recognition pipeline. Simply modify the value of the pipeline parameter in the create_pipeline method to the path of the pipeline configuration file. An example is as follows:

For example, if your configuration file is saved at ./my_path/multilingual_speech_recognition.yaml, you just need to execute:

from paddlex import create_pipeline

pipeline = create_pipeline(pipeline="./my_path/multilingual_speech_recognition.yaml")
output = pipeline.predict(input="zh.wav")

for res in output:
    res.print()
    res.save_to_json(save_path="./output/")

Note: The parameters in the configuration file are the initialization parameters for the pipeline. If you want to change the initialization parameters of the multilingual_speech_recognition pipeline, you can directly modify the parameters in the configuration file and load the configuration file for prediction. Additionally, CLI prediction also supports passing in a configuration file, simply specify the path of the configuration file with --pipeline.
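
For example, a CLI prediction using the configuration file saved above:

paddlex --pipeline ./my_path/multilingual_speech_recognition.yaml \
        --input zh.wav \
        --save_path ./output \
        --device gpu:0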


3. Development Integration/Deployment

If the pipeline meets your requirements for inference speed and accuracy, you can directly proceed with development integration/deployment.

If you need to apply the pipeline directly in your Python project, you can refer to the example code in 2.1.2 Integration with Python Script.

In addition, PaddleX also provides three other deployment methods, which are detailed as follows:

🚀 High-Performance Inference: In actual production environments, many applications have strict performance requirements for deployment strategies, especially in terms of response speed, to ensure the efficient operation of the system and the smoothness of the user experience. To this end, PaddleX provides a high-performance inference plugin, which aims to deeply optimize the performance of model inference and pre/post-processing to achieve significant acceleration of the end-to-end process. For detailed high-performance inference procedures, please refer to the PaddleX High-Performance Inference Guide.

☁️ Service-Oriented Deployment: Service-oriented deployment is a common deployment form in actual production environments. By encapsulating inference functions as services, clients can access these services through network requests to obtain inference results. PaddleX supports multiple pipeline service-oriented deployment solutions. For detailed pipeline service-oriented deployment procedures, please refer to the PaddleX Service-Oriented Deployment Guide.

Below is the API reference for basic service-oriented deployment, along with an example of invoking the service:

API Reference

For the main operations provided by the service:

  • The HTTP request method is POST.
  • Both the request body and response body are JSON data (JSON objects).
  • When the request is processed successfully, the response status code is 200, and the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| logId | string | The UUID of the request. |
| errorCode | integer | Error code. Fixed at 0. |
| errorMsg | string | Error message. Fixed as "Success". |
| result | object | The result of the operation. |

  • When the request is not processed successfully, the properties of the response body are as follows:

| Name | Type | Meaning |
|---|---|---|
| logId | string | The UUID of the request. |
| errorCode | integer | Error code. Same as the response status code. |
| errorMsg | string | Error message. |

The main operations provided by the service are as follows:

  • infer

Perform multilingual speech recognition on audio.

POST /multilingual-speech-recognition

  • The properties of the request body are as follows:

| Name | Type | Meaning | Required |
|---|---|---|---|
| audio | string | The URL or path of an audio file accessible by the server. | Yes |

  • When the request is processed successfully, the result of the response body has the following properties:

| Name | Type | Meaning |
|---|---|---|
| text | string | The text result of speech recognition. |
| segments | array | The recognized segments with timestamps. |
| language | string | The recognized language. |

Each element in segments is an object with the following properties:

| Name | Type | Meaning |
|---|---|---|
| id | integer | The ID of the audio segment. |
| seek | integer | The pointer of the audio segment. |
| start | number | The start time of the audio segment. |
| end | number | The end time of the audio segment. |
| text | string | The recognized text of the audio segment. |
| tokens | array | The token IDs of the segment text. |
| temperature | number | The sampling temperature used during decoding. |
| avgLogProb | number | The average log probability. |
| compressionRatio | number | The compression ratio. |
| noSpeechProb | number | The probability of no speech. |
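
Putting these fields together, the result of a successful infer call has roughly the following shape (shown here as a Python dict; the values are illustrative, adapted from the quick-start output, not real service output):

result = {
    "text": "我认为跑步最重要的就是给我带来了身体健康",
    "segments": [
        {
            "id": 0,
            "seek": 0,
            "start": 0.0,
            "end": 2.0,
            "text": "我认为跑步最重要的就是",
            "tokens": [50364, 1654, 7422],  # truncated for brevity
            "temperature": 0,
            "avgLogProb": -0.23,
            "compressionRatio": 0.28,
            "noSpeechProb": 0.03,
        },
    ],
    "language": "zh",
}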
Example of Multilingual Service Invocation
Python

import requests

API_URL = "http://localhost:8080/multilingual-speech-recognition" # Service URL
audio_path = "./zh.wav"

payload = {"audio": audio_path}

# Invoke API
response = requests.post(API_URL, json=payload)

# Process API response
assert response.status_code == 200
result = response.json()["result"]
print(result)


📱 Edge Deployment: Edge deployment is a method that places computational and data processing capabilities directly on user devices, allowing them to process data without relying on remote servers. PaddleX supports deploying models on edge devices such as Android. For detailed procedures, please refer to the PaddleX Edge Deployment Guide. You can choose the appropriate deployment method based on your needs to integrate the model into your pipeline and proceed with subsequent AI application integration.

4. Secondary Development

If the default model weights provided by the multilingual speech recognition pipeline do not meet your accuracy or speed requirements, you can try to fine-tune the existing model using your own domain-specific or application-specific data to improve the recognition performance of the pipeline in your scenario.

4.1 Model Fine-Tuning

Since the multilingual speech recognition pipeline only includes a speech recognition module, if the performance of the pipeline is not up to expectations, you can analyze the audio files with poor recognition results and refer to the corresponding fine-tuning tutorial link in the table below for model fine-tuning.

| Scenario | Fine-Tuning Module | Fine-Tuning Reference Link |
|---|---|---|
| Inaccurate speech recognition | Multilingual Speech Recognition Module | Link |

4.2 Model Application

After completing the fine-tuning with your private dataset, you will obtain the local model weight file.

If you need to use the fine-tuned model weights, simply modify the pipeline configuration file, replacing the path of the default model weights with the local path of your fine-tuned model weights:

from paddlex import create_pipeline
pipeline = create_pipeline(pipeline="./my_path/multilingual_speech_recognition.yaml")
output = pipeline.predict(input="zh.wav")
for res in output:
    res.print()
    res.save_to_json("./output/")

Subsequently, refer to the command-line method or Python script method in the local experience to load the modified pipeline configuration file.

5. Multi-Hardware Support

PaddleX supports a variety of mainstream hardware devices, including NVIDIA GPU, Kunlunxin XPU, Ascend NPU, and Cambricon MLU. Simply modify the --device parameter to seamlessly switch between different hardware devices.

For example, if you use an Ascend NPU for speech recognition with this pipeline, the command used is:
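
This mirrors the quick-start command from Section 2.1.1, with only the device flag changed:

paddlex --pipeline multilingual_speech_recognition \
        --input zh.wav \
        --save_path ./output \
        --device npu:0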
