Utils.initializer (Initialization) Module

ppsci.utils.initializer

The initialization methods in this module are aligned with the PyTorch initializers. If you need PaddlePaddle's native initialization methods, please refer to paddle.nn.initializer.

This code is based on torch.nn.init. The copyright of pytorch/pytorch is a BSD-style license, as found in the LICENSE file.
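For example, a minimal usage sketch of applying these helpers to a layer's parameters in the PyTorch style (the layer and its shapes below are arbitrary, chosen only for illustration):

>>> import paddle
>>> import ppsci
>>> layer = paddle.nn.Linear(64, 64)  # arbitrary layer for illustration
>>> # Parameters are paddle Tensors, so they can be passed to the initializers directly.
>>> _ = ppsci.utils.initializer.xavier_uniform_(layer.weight)
>>> _ = ppsci.utils.initializer.zeros_(layer.bias)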
uniform_(tensor, a, b)

Initialize tensor in place with values drawn from the uniform distribution U(a, b).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| a | float | Minimum value. | required |
| b | float | Maximum value. | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.uniform_(param, -1, 1)
Source code in ppsci/utils/initializer.py
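A further sketch: uniform_ can reproduce a fan-in-scaled range by computing the bound manually. The 1/sqrt(fan_in) bound below is borrowed from PyTorch's default Linear initialization and is an assumption for illustration, not part of this module:

>>> import math
>>> import paddle
>>> import ppsci
>>> weight = paddle.empty((128, 256), "float32")
>>> fan_in = weight.shape[1]  # assuming the default [fout, fin, ...] layout (reverse=False convention)
>>> bound = 1.0 / math.sqrt(fan_in)  # PyTorch-style bound, assumed here for illustration
>>> weight = ppsci.utils.initializer.uniform_(weight, -bound, bound)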
normal_(tensor, mean=0.0, std=1.0)

Initialize tensor in place with values drawn from the normal distribution N(mean, std²).

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| mean | float | Mean of the distribution. Defaults to 0.0. | 0.0 |
| std | float | Standard deviation of the distribution. Defaults to 1.0. | 1.0 |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.normal_(param, 0, 1)
Source code in ppsci/utils/initializer.py
trunc_normal_(tensor, mean=0.0, std=1.0, a=-2.0, b=2.0)

Initialize tensor in place with values drawn from a truncated normal distribution: values are drawn from N(mean, std²) and restricted to the interval [a, b].

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| mean | float | The mean of the normal distribution. Defaults to 0.0. | 0.0 |
| std | float | The standard deviation of the normal distribution. Defaults to 1.0. | 1.0 |
| a | float | The minimum cutoff value. Defaults to -2.0. | -2.0 |
| b | float | The maximum cutoff value. Defaults to 2.0. | 2.0 |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.trunc_normal_(param, 0.0, 1.0)
Source code in ppsci/utils/initializer.py
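A sketch of a common pattern, assuming (as the parameter table above suggests) that a and b are absolute cutoff values rather than multiples of std, so they are usually chosen in proportion to std:

>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> std = 0.02
>>> # a and b are absolute cutoffs, here set to +/- 2 standard deviations.
>>> param = ppsci.utils.initializer.trunc_normal_(param, mean=0.0, std=std, a=-2 * std, b=2 * std)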
constant_(tensor, value=0.0)

Initialize tensor in place with a constant value.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| value | float | Value to fill tensor with. Defaults to 0.0. | 0.0 |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.constant_(param, 2)
Source code in ppsci/utils/initializer.py
ones_(tensor)

Initialize tensor in place with ones.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.ones_(param)
Source code in ppsci/utils/initializer.py
zeros_(tensor)

Initialize tensor in place with zeros.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.zeros_(param)
Source code in ppsci/utils/initializer.py
xavier_uniform_(tensor, gain=1.0, reverse=False)

Initialize tensor in place using the Xavier (Glorot) uniform initialization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| gain | float | Scaling factor. Defaults to 1.0. | 1.0 |
| reverse | bool | Tensor data format order; if False (default), the tensor is treated as [fout, fin, ...]. Defaults to False. | False |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.xavier_uniform_(param)
Source code in ppsci/utils/initializer.py
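An additional sketch: the gain can rescale the Xavier bound for a specific activation. The sqrt(2) value below is the standard ReLU gain, assumed here for illustration rather than provided by this module:

>>> import math
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> relu_gain = math.sqrt(2.0)  # standard gain for ReLU, assumed for illustration
>>> param = ppsci.utils.initializer.xavier_uniform_(param, gain=relu_gain)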
xavier_normal_(tensor, gain=1.0, reverse=False)

Initialize tensor in place using the Xavier (Glorot) normal initialization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| gain | float | Scaling factor. Defaults to 1.0. | 1.0 |
| reverse | bool | Tensor data format order; if False (default), the tensor is treated as [fout, fin, ...]. Defaults to False. | False |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.xavier_normal_(param)
Source code in ppsci/utils/initializer.py
kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu', reverse=False)

Initialize tensor in place using the Kaiming (He) uniform initialization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| a | float | The negative slope of the rectifier used after this layer. Defaults to 0. | 0 |
| mode | Literal["fan_in", "fan_out"] | Either "fan_in" or "fan_out". Defaults to "fan_in". | 'fan_in' |
| nonlinearity | str | Nonlinearity method name. Defaults to "leaky_relu". | 'leaky_relu' |
| reverse | bool | Tensor data format order; if False (default), the tensor is treated as [fout, fin, ...]. Defaults to False. | False |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.kaiming_uniform_(param)
Source code in ppsci/utils/initializer.py
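A sketch of the non-default arguments: for a convolution weight followed by ReLU, one would typically pass nonlinearity="relu" and may switch to "fan_out" mode. The layer below is hypothetical and chosen only for illustration:

>>> import paddle
>>> import ppsci
>>> conv = paddle.nn.Conv2D(16, 32, 3)  # hypothetical layer for illustration
>>> _ = ppsci.utils.initializer.kaiming_uniform_(
...     conv.weight, mode="fan_out", nonlinearity="relu"
... )
>>> _ = ppsci.utils.initializer.zeros_(conv.bias)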
kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu', reverse=False)

Initialize tensor in place using the Kaiming (He) normal initialization.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| tensor | Tensor | Paddle Tensor. | required |
| a | float | The negative slope of the rectifier used after this layer. Defaults to 0. | 0 |
| mode | Literal["fan_in", "fan_out"] | Either "fan_in" or "fan_out". Defaults to "fan_in". | 'fan_in' |
| nonlinearity | str | Nonlinearity method name. Defaults to "leaky_relu". | 'leaky_relu' |
| reverse | bool | Tensor data format order; if False (default), the tensor is treated as [fout, fin, ...]. Defaults to False. | False |

Returns:

| Type | Description |
| --- | --- |
| Tensor | paddle.Tensor: Initialized tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> param = paddle.empty((128, 256), "float32")
>>> param = ppsci.utils.initializer.kaiming_normal_(param)
Source code in ppsci/utils/initializer.py
linear_init_(module)

Initialize the weight and bias of module as a linear layer.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| module | Layer | Linear layer to be initialized. | required |
Examples:
>>> import paddle
>>> import ppsci
>>> layer = paddle.nn.Linear(128, 256)
>>> ppsci.utils.initializer.linear_init_(layer)
Source code in ppsci/utils/initializer.py
conv_init_(module)

Initialize the weight and bias of module as a convolution layer.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| module | Layer | Convolution layer to be initialized. | required |
Examples:
>>> import paddle
>>> import ppsci
>>> layer = paddle.nn.Conv2D(4, 16, 2)
>>> ppsci.utils.initializer.conv_init_(layer)
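Finally, a sketch of applying linear_init_ and conv_init_ across a whole network by iterating over its sublayers. The model below is hypothetical and its shapes are arbitrary; no forward pass is run here:

>>> import paddle
>>> import ppsci
>>> model = paddle.nn.Sequential(
...     paddle.nn.Conv2D(3, 16, 3),
...     paddle.nn.ReLU(),
...     paddle.nn.Linear(16, 4),
... )
>>> # Initialize each sublayer according to its type.
>>> for m in model.sublayers():
...     if isinstance(m, paddle.nn.Conv2D):
...         ppsci.utils.initializer.conv_init_(m)
...     elif isinstance(m, paddle.nn.Linear):
...         ppsci.utils.initializer.linear_init_(m)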