Class CinnComputation

Nested Relationships

Nested Types

Class Documentation

class cinn::frontend::CinnComputation

Public Functions

std::vector<std::string> GetAllTensorNames()

Get the names of all variables in the program.

hlir::framework::Tensor GetTensor(const std::string &name)

Get a tensor by its name.

Parameters
  • name: tensor name

std::vector<hlir::framework::Tensor> GetInputTensors()

Get the input tensors.

std::vector<hlir::framework::Tensor> GetOutputTensors()

Get the output tensors.

void SetTensorData(hlir::framework::Tensor &t, void *data, size_t size)

Set the data of a tensor from a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.

Parameters
  • t: the tensor

  • data: address of the buffer holding the data to copy into the tensor

  • size: size of the memory buffer

void SetTensorData(const std::string &tname, void *data, size_t size)

Set the data of a tensor (specified by its name) from a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.

Parameters
  • tname: name of the tensor

  • data: address of the buffer holding the data to copy into the tensor

  • size: size of the memory buffer

void GetTensorData(hlir::framework::Tensor &t, void *data, size_t size)

Copy the data of a tensor to a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.

Parameters
  • t: the tensor

  • data: address of the buffer that receives the tensor’s data

  • size: size of the memory buffer

void GetTensorData(const std::string &tname, void *data, size_t size)

Copy the data of a tensor (specified by its name) to a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.

Parameters
  • tname: name of the tensor

  • data: address of the buffer that receives the tensor’s data

  • size: size of the memory buffer
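
As a hedged sketch of the data-transfer calls above: the computation object `comp`, the tensor names `"x"` and `"y"`, and the element count are all assumptions for illustration, not part of the documented API.

```cpp
// Sketch only: assumes `comp` is a compiled CinnComputation whose program has a
// float32 input tensor named "x" and an output tensor named "y", each holding
// 1024 elements. Names and sizes here are illustrative placeholders.
std::vector<float> input(1024, 1.0f);
comp->SetTensorData("x", input.data(), input.size() * sizeof(float));  // host buffer -> tensor
                                                                       // (cudaMemcpy on NVGPU)
comp->Execute();  // run the compiled program

std::vector<float> output(1024);
comp->GetTensorData("y", output.data(), output.size() * sizeof(float));  // tensor -> host buffer
```

Note that `size` is a byte count, so the element count must be multiplied by `sizeof` the element type.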

void Execute(const std::map<std::string, cinn_pod_value_t> *name2podargs = nullptr)

Run the compiled program.

Public Static Functions

CompileOptions DefaultCompileOptions()
std::shared_ptr<CinnComputation> BuildAndCompile(const Target &target, BaseBuilder &builder, const CompileOptions &options = DefaultCompileOptions(), const std::vector<Variable> &outputs = {}, void *stream = nullptr)

Build a program from the given BaseBuilder, then compile it. The BaseBuilder is normally a NetBuilder or CINNBuilder.

Return

a shared_ptr pointing to the created CinnComputation instance

Parameters
  • target: the target to run the program

  • builder: program builder (NetBuilder or CINNBuilder)

  • options: compile options that configure the compilation steps

  • outputs: program output variables; if empty, the output variable of the program’s last instruction is used

  • stream: CUDA stream, the value is meaningful only when target is NVGPU
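
A minimal end-to-end sketch of BuildAndCompile. The header path, the NetBuilder op names (`CreateInput`, `Relu`), and `DefaultHostTarget()` are assumptions based on common CINN usage and should be checked against the headers of the installed version.

```cpp
// Sketch only: include paths and op names below are assumptions; verify them
// against your CINN version.
#include "cinn/frontend/net_builder.h"

using namespace cinn;
using namespace cinn::frontend;

NetBuilder builder("demo");
auto x = builder.CreateInput(common::Float(32), {64, 64}, "x");  // float32 input, shape 64x64
auto y = builder.Relu(x);                                        // illustrative op

auto target = common::DefaultHostTarget();
// Build the program from the builder and compile it in one step;
// `y` is requested explicitly as the program output.
auto comp = CinnComputation::BuildAndCompile(
    target, builder, CinnComputation::DefaultCompileOptions(), {y});
```

With `outputs` left empty, the output of the last instruction (here the Relu) would be used instead.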

std::shared_ptr<CinnComputation> Compile(const Target &target, Program &program, const CompileOptions &options = DefaultCompileOptions(), const std::vector<Variable> &outputs = {}, void *stream = nullptr)

Compile the given program.

Return

a shared_ptr pointing to the created CinnComputation instance

Parameters
  • target: the target to run the program

  • program: the program to compile (usually generated by a builder, or converted from a Paddle model)

  • options: compile options that configure the compilation steps

  • outputs: program output variables; if empty, the output variable of the program’s last instruction is used

  • stream: CUDA stream, the value is meaningful only when target is NVGPU
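
Compile can also be called directly when a Program already exists. This sketch assumes a `builder` as in the BuildAndCompile example and that `Build()` returns the frontend Program; both are assumptions to verify against the installed headers.

```cpp
// Sketch only: assumes `builder` is a populated NetBuilder and that Build()
// yields the frontend Program for it.
Program program = builder.Build();

auto target = common::DefaultHostTarget();
// Compile with the default options; outputs default to the last instruction's
// output variable.
auto comp = CinnComputation::Compile(target, program);
```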

std::shared_ptr<CinnComputation> CompilePaddleModel(const Target &target, const std::string &model_path, const std::vector<std::string> &input_names, const std::vector<hlir::framework::shape_t> &input_shapes, bool params_combined, const CompileOptions &options = DefaultCompileOptions(), void *stream = nullptr)

Convert a Paddle model to a program, then compile it.

Return

a shared_ptr pointing to the created CinnComputation instance

Parameters
  • target: the target to run the program

  • model_path: the path of the Paddle model

  • input_names: input variable names of the Paddle model

  • input_shapes: input variable shapes of the Paddle model

  • params_combined: whether the model parameters are stored in a single combined file

  • options: compile options that configure the compilation steps

  • stream: CUDA stream, the value is meaningful only when target is NVGPU
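
A hedged sketch of CompilePaddleModel. The model path, input name, and shape are placeholders; only the parameter order follows the signature above.

```cpp
// Sketch only: the path, input name, and shape below are placeholders for a
// real exported Paddle model.
auto target = common::DefaultHostTarget();
auto comp = CinnComputation::CompilePaddleModel(
    target,
    "/path/to/paddle_model",    // directory of the exported Paddle model
    {"image"},                  // input variable names in the model
    {{1, 3, 224, 224}},         // shapes matching input_names, one per input
    /*params_combined=*/true);  // parameters saved as one combined file
```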

struct CompileOptions : public CompileOptions

Public Members

bool use_decomposer = false
bool do_prerun = true
bool use_default_passes = true
std::vector<std::string> passes
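
The members above might be combined as follows; the pass names in the sketch are illustrative placeholders, not a verified list, so consult the passes registered in your CINN build.

```cpp
// Sketch only: pass names are placeholders; check the registered program
// passes of your CINN version for valid names.
auto options = CinnComputation::DefaultCompileOptions();
options.use_decomposer = true;        // run the decomposer on the program
options.use_default_passes = false;   // disable the built-in pass list...
options.passes = {"Decomposer", "OpFusion"};  // ...and supply an explicit one
```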