Kunlunxin XPU

Requirements

  • OS: Linux
  • Python: 3.10
  • XPU Model: P800
  • XPU Driver Version: ≥ 5.0.21.10
  • XPU Firmware Version: ≥ 1.31

Verified platform:

  • CPU: INTEL(R) XEON(R) PLATINUM 8563C / Hygon C86-4G 7490 64-core Processor
  • Memory: 2 TB
  • Disk: 4 TB
  • OS: CentOS release 7.6 (Final)
  • Python: 3.10
  • XPU Model: P800 (OAM Edition)
  • XPU Driver Version: 5.0.21.10
  • XPU Firmware Version: 1.31

Note: Currently, only INTEL or Hygon CPU-based P800 (OAM Edition) servers have been verified. Other CPU types and P800 (PCIe Edition) servers have not been tested yet.

1. Set up using pre-built Docker image

mkdir Work
cd Work
docker pull ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/fastdeploy-xpu:2.0.0
docker run --name fastdeploy-xpu --net=host -itd --privileged -v $PWD:/Work -w /Work \
    ccr-2vdh3abv-pub.cnc.bj.baidubce.com/paddlepaddle/fastdeploy-xpu:2.0.0 \
    /bin/bash
docker exec -it fastdeploy-xpu /bin/bash
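
Before installing anything, it is worth sanity-checking the container itself; the /dev/xpu* device-node naming below is an assumption about the Kunlunxin driver, so adjust it to your environment.

# Confirm the interpreter matches the Python 3.10 requirement listed above
python --version
# Check that the XPU device nodes are visible inside the privileged container
# (the /dev/xpu* naming is an assumption; adjust to your driver's conventions)
ls /dev/ | grep -i xpu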

2. Set up using pre-built wheels

Install PaddlePaddle

python -m pip install paddlepaddle-xpu==3.1.0 -i https://www.paddlepaddle.org.cn/packages/stable/xpu-p800/

Alternatively, you can install the latest version of PaddlePaddle (Not recommended)

python -m pip install --pre paddlepaddle-xpu -i https://www.paddlepaddle.org.cn/packages/nightly/xpu-p800/
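
Whichever of the two wheels you install, you can confirm that the XPU build of PaddlePaddle imports and passes its self-check before moving on; these are the same commands used in the verification section below.

python -c "import paddle; paddle.version.show()"
python -c "import paddle; paddle.utils.run_check()"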

Install FastDeploy (Do NOT install via PyPI source)

python -m pip install fastdeploy-xpu==2.0.0 -i https://www.paddlepaddle.org.cn/packages/stable/fastdeploy-xpu-p800/ --extra-index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

Alternatively, you can install the latest version of FastDeploy (Not recommended)

python -m pip install --pre fastdeploy-xpu -i https://www.paddlepaddle.org.cn/packages/nightly/fastdeploy-xpu-p800/ --extra-index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
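
To confirm that the XPU-specific wheel was installed (rather than a package pulled from the PyPI index), the custom-ops import from the verification section below is a quick check.

python -c "from fastdeploy.model_executor.ops.xpu import block_attn"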

3. Build wheel from source

Install PaddlePaddle

python -m pip install paddlepaddle-xpu==3.1.0 -i https://www.paddlepaddle.org.cn/packages/stable/xpu-p800/

Alternatively, you can install the latest version of PaddlePaddle (Not recommended)

python -m pip install --pre paddlepaddle-xpu -i https://www.paddlepaddle.org.cn/packages/nightly/xpu-p800/

Download the FastDeploy source code and check out a stable branch/tag

git clone https://github.com/PaddlePaddle/FastDeploy
cd FastDeploy
git checkout <tag or branch>
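
If you are unsure which tag or branch to check out, you can list the candidates from the clone first; tag naming follows the repository's own conventions.

# List release tags and remote branches to pick a stable checkout target
git tag --list
git branch -r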

Download the Kunlunxin compilation dependencies

bash custom_ops/xpu_ops/src/download_dependencies.sh stable

Alternatively, you can download the latest versions of XTDK and XVLLM (Not recommended)

bash custom_ops/xpu_ops/src/download_dependencies.sh develop

Set the following environment variables

export CLANG_PATH=$(pwd)/custom_ops/xpu_ops/src/third_party/xtdk
export XVLLM_PATH=$(pwd)/custom_ops/xpu_ops/src/third_party/xvllm
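
As a sanity check before building, confirm that both variables point at the directories populated by the dependency download script above.

ls "$CLANG_PATH" "$XVLLM_PATH"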

Compile and install

bash build.sh

The compiled outputs will be located in the FastDeploy/dist directory.
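
The build step above both compiles and installs the package; if you need to install the built artifact separately (for example, into another environment on the same machine), the wheel under dist/ can be installed directly. The glob below is illustrative; the actual filename encodes the version and Python ABI.

# Run from the FastDeploy repository root, where dist/ was created
python -m pip install dist/*.whl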

Installation verification

python -c "import paddle; paddle.version.show()"
python -c "import paddle; paddle.utils.run_check()"
python -c "from paddle.jit.marker import unified"
python -c "from fastdeploy.model_executor.ops.xpu import block_attn"

If all the above steps execute successfully, FastDeploy is installed correctly.

How to deploy services on Kunlunxin XPU

Refer to Supported Models and Service Deployment for details on the supported models and how to deploy services on Kunlunxin XPU.
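
As a minimal sketch only, assuming FastDeploy's OpenAI-compatible api_server entrypoint, a service launch has the following shape; treat the entrypoint, flags, and values below as assumptions to be confirmed against the deployment guide.

# Sketch only; confirm the entrypoint, flags, and port against the deployment guide
python -m fastdeploy.entrypoints.openai.api_server \
    --model <model name or local path> \
    --port 8188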