Class CinnComputation
Defined in File computation.h
Nested Relationships
Nested Types
Class Documentation
-
class cinn::frontend::CinnComputation

Public Functions
-
std::vector<std::string>
GetAllTensorNames() get all variable names in the program
-
hlir::framework::Tensor
GetTensor(const std::string &name) get tensor by name
- Parameters
name: tensor name
-
std::vector<hlir::framework::Tensor>
GetInputTensors() get input tensors
-
std::vector<hlir::framework::Tensor>
GetOutputTensors() get output tensors
-
void
SetTensorData(hlir::framework::Tensor &t, void *data, size_t size) set the data of a tensor from a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.
- Parameters
t: the tensor
data: address of the memory buffer containing the data to copy into the tensor
size: size of the memory buffer
-
void
SetTensorData(const std::string &tname, void *data, size_t size) set the data of a tensor (specified by its name) from a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.
- Parameters
tname: name of the tensor
data: address of the memory buffer containing the data to copy into the tensor
size: size of the memory buffer
-
void
GetTensorData(hlir::framework::Tensor &t, void *data, size_t size) copy the data of a tensor to a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.
- Parameters
t: the tensor
data: address of the memory buffer to store the tensor's data
size: size of the memory buffer
-
void
GetTensorData(const std::string &tname, void *data, size_t size) copy the data of a tensor (specified by its name) to a user-specified buffer. If the tensor is in NVGPU device memory, cudaMemcpy is used.
- Parameters
tname: name of the tensor
data: address of the memory buffer to store the tensor's data
size: size of the memory buffer
-
void
Execute(const std::map<std::string, cinn_pod_value_t> *name2podargs = nullptr) run the compiled program
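Taken together, a typical end-to-end use of the data-transfer and execution calls above can be sketched as follows. This is a hedged illustration, not code from the CINN repository: it assumes the CINN headers are available and uses the placeholder tensor names "x" and "out", which must match the names actually present in your compiled program (see GetAllTensorNames).

```cpp
#include "cinn/frontend/computation.h"

// Sketch: feed one input tensor, run the compiled program, read back one
// output. "x" and "out" are hypothetical tensor names for illustration.
void RunOnce(cinn::frontend::CinnComputation &comp,
             float *input, size_t input_bytes,
             float *output, size_t output_bytes) {
  // Copy user data into the input tensor; on an NVGPU target this
  // performs a cudaMemcpy under the hood.
  comp.SetTensorData("x", input, input_bytes);

  // Run the compiled program with the default argument map.
  comp.Execute();

  // Copy the result tensor back into the caller's buffer.
  comp.GetTensorData("out", output, output_bytes);
}
```

The buffer sizes must match the byte sizes of the corresponding tensors; the tensor names and shapes can be inspected beforehand via GetAllTensorNames and GetTensor.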
Public Static Functions
-
CompileOptions
DefaultCompileOptions()
-
std::shared_ptr<CinnComputation>
BuildAndCompile(const Target &target, BaseBuilder &builder, const CompileOptions &options = DefaultCompileOptions(), const std::vector<Variable> &outputs = {}, void *stream = nullptr) build a program from a BaseBuilder, then compile it. BaseBuilder is normally a NetBuilder or CINNBuilder.
- Return
shared_ptr pointing to CinnComputation instance
- Parameters
target: the target to run the program
builder: program builder (NetBuilder or CINNBuilder)
options: CompileOptions that configure the compilation steps
outputs: program output variables; if empty, the output variable of the last instruction of the program is used
stream: CUDA stream; meaningful only when target is NVGPU
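A minimal sketch of BuildAndCompile with a NetBuilder is shown below. It is illustrative only: the NetBuilder method names and the operand names "A"/"B" are assumptions based on the CINN frontend and should be checked against the version you build against.

```cpp
#include "cinn/frontend/computation.h"
#include "cinn/frontend/net_builder.h"

// Sketch: build an element-wise add program and compile it in one call.
std::shared_ptr<cinn::frontend::CinnComputation> BuildAddProgram() {
  cinn::frontend::NetBuilder builder("add_net");

  // Two 32x32 float inputs; names "A" and "B" are placeholders.
  auto a = builder.CreateInput(cinn::common::Float(32), {32, 32}, "A");
  auto b = builder.CreateInput(cinn::common::Float(32), {32, 32}, "B");

  // Since no explicit `outputs` are passed to BuildAndCompile, the output
  // of this last instruction becomes the program output.
  auto c = builder.Add(a, b);

  auto target = cinn::common::DefaultHostTarget();
  return cinn::frontend::CinnComputation::BuildAndCompile(target, builder);
}
```

Default CompileOptions are used here; pass a custom options object (or an explicit `outputs` vector) to override the compilation steps or the output selection.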
-
std::shared_ptr<CinnComputation>
Compile(const Target &target, Program &program, const CompileOptions &options = DefaultCompileOptions(), const std::vector<Variable> &outputs = {}, void *stream = nullptr) compile the program
- Return
shared_ptr pointing to CinnComputation instance
- Parameters
target: the target to run the program
program: the program (usually generated by a Builder, or converted from a Paddle model)
options: CompileOptions that configure the compilation steps
outputs: program output variables; if empty, the output variable of the last instruction of the program is used
stream: CUDA stream; meaningful only when target is NVGPU
-
std::shared_ptr<CinnComputation>
CompilePaddleModel(const Target &target, const std::string &model_path, const std::vector<std::string> &input_names, const std::vector<hlir::framework::shape_t> &input_shapes, bool params_combined, const CompileOptions &options = DefaultCompileOptions(), void *stream = nullptr) convert a Paddle model to a program, then compile it.
- Return
shared_ptr pointing to CinnComputation instance
- Parameters
target: the target to run the program
model_path: path of the Paddle model
input_names: input variable names of the Paddle model
input_shapes: input variable shapes of the Paddle model
params_combined: whether the parameters are stored in a combined file
options: CompileOptions that configure the compilation steps
stream: CUDA stream; meaningful only when target is NVGPU
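The call can be sketched as below. The model path, input name, and shape are placeholders for illustration; substitute the names and shapes your Paddle model actually declares.

```cpp
#include "cinn/frontend/computation.h"

// Sketch: load and compile a Paddle model for the host target.
std::shared_ptr<cinn::frontend::CinnComputation> LoadModel() {
  auto target = cinn::common::DefaultHostTarget();

  // Hypothetical single image input with NCHW shape 1x3x224x224.
  std::vector<std::string> input_names = {"image"};
  std::vector<cinn::hlir::framework::shape_t> input_shapes = {
      {1, 3, 224, 224}};

  // true when the model's parameters are stored in one combined file
  // rather than one file per parameter.
  bool params_combined = true;

  return cinn::frontend::CinnComputation::CompilePaddleModel(
      target, "/path/to/model", input_names, input_shapes, params_combined);
}
```

After compilation, inputs are fed and outputs read with the SetTensorData/GetTensorData calls documented above.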
-
struct
CompileOptions: public CompileOptions
-
std::vector<std::string>