AutoDiff (Automatic Differentiation) Module
ppsci.autodiff.ad
This module is adapted from https://github.com/lululxvi/deepxde
jacobian: Callable[['paddle.Tensor', Union['paddle.Tensor', List['paddle.Tensor']], int, Optional[int], Optional[bool], bool], Union['paddle.Tensor', List['paddle.Tensor']]] = Jacobians() (module attribute)

hessian: Callable[['paddle.Tensor', 'paddle.Tensor', Optional[int], int, int, Optional['paddle.Tensor'], Optional[bool], bool], 'paddle.Tensor'] = Hessians() (module attribute)
Jacobians

Compute multiple Jacobians.
A new instance is created for each new (output, input) pair. For an (output, input) pair that has been computed before, the previous instance is reused rather than creating a new one.
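For illustration, a minimal sketch of this caching in practice (the second call below refers to the same (output, input) pair, so per the description above it is expected to be served from the cache rather than triggering a fresh backward pass):

>>> import paddle
>>> import ppsci
>>> x = paddle.randn([4, 2])
>>> x.stop_gradient = False
>>> y = (x * x).sum(axis=1, keepdim=True)
>>> dy_dx = ppsci.autodiff.jacobian(y, x)        # computed and cached for the pair (y, x)
>>> dy_dx_again = ppsci.autodiff.jacobian(y, x)  # same pair: reuses the cached instance
>>> print(dy_dx_again.shape)
[4, 2]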
Source code in ppsci/autodiff/ad.py
__call__(ys, xs, i=0, j=None, retain_graph=None, create_graph=True)
Compute Jacobians for the given ys and xs.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
ys | Tensor | Output tensor. | required |
xs | Union[Tensor, List[Tensor]] | Input tensor(s). | required |
i | int | i-th output variable. Defaults to 0. | 0 |
j | Optional[int] | j-th input variable. Defaults to None. | None |
retain_graph | Optional[bool] | Whether to retain the forward graph used to compute the gradient. If True, the graph is retained so that backward can be run twice on the same graph; if False, the graph is freed. Defaults to None, which means it is equal to create_graph. | None |
create_graph | bool | Whether to create the gradient graph during computation. If True, higher-order derivatives can be computed; if False, the gradient graph of the computing process is discarded. Defaults to True. | True |
Returns:

Type | Description |
---|---|
Union['paddle.Tensor', List['paddle.Tensor']] | Jacobian matrix of ys[i] with respect to xs[j]. |
Examples:
>>> import paddle
>>> import ppsci
>>> x = paddle.randn([4, 1])
>>> x.stop_gradient = False
>>> y = x * x
>>> dy_dx = ppsci.autodiff.jacobian(y, x)
>>> print(dy_dx.shape)
[4, 1]
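The example above uses the defaults i=0 and j=None. As a hedged sketch of the index arguments (assuming the indexing behavior of the deepxde implementation this module is adapted from), i selects an output column and j selects an input column; with j=None the derivative with respect to all input columns is returned:

>>> import paddle
>>> import ppsci
>>> x = paddle.randn([4, 3])
>>> x.stop_gradient = False
>>> y = paddle.concat([x.sum(axis=1, keepdim=True), (x * x).sum(axis=1, keepdim=True)], axis=1)  # shape [4, 2]
>>> dy1_dx = ppsci.autodiff.jacobian(y, x, i=1)        # d y[:, 1] / d x, all input columns
>>> print(dy1_dx.shape)
[4, 3]
>>> dy1_dx2 = ppsci.autodiff.jacobian(y, x, i=1, j=2)  # d y[:, 1] / d x[:, 2] only
>>> print(dy1_dx2.shape)
[4, 1]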
Source code in ppsci/autodiff/ad.py
Hessians

Compute multiple Hessians.
A new instance is created for each new (output, input) pair. For an (output, input) pair that has been computed before, the previous instance is reused rather than creating a new one.
Source code in ppsci/autodiff/ad.py
__call__(ys, xs, component=None, i=0, j=0, grad_y=None, retain_graph=None, create_graph=True)
Compute the Hessian matrix for the given ys and xs.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
ys | Tensor | Output tensor. | required |
xs | Tensor | Input tensor. | required |
component | Optional[int] | If ys has the shape (batch_size, dim_y > 1), then ys[:, component] is used as the output to differentiate; leave it as None when ys has the shape (batch_size, 1). Defaults to None. | None |
i | int | i-th input variable. Defaults to 0. | 0 |
j | int | j-th input variable. Defaults to 0. | 0 |
grad_y | Optional[Tensor] | The gradient of the selected output with respect to xs; provide it if already known, to avoid duplicate computation. Defaults to None. | None |
retain_graph | Optional[bool] | Whether to retain the forward graph used to compute the gradient. If True, the graph is retained so that backward can be run twice on the same graph; if False, the graph is freed. Defaults to None, which means it is equal to create_graph. | None |
create_graph | bool | Whether to create the gradient graph during computation. If True, higher-order derivatives can be computed; if False, the gradient graph of the computing process is discarded. Defaults to True. | True |
Returns:

Type | Description |
---|---|
'paddle.Tensor' | Hessian matrix. |
Examples:
>>> import paddle
>>> import ppsci
>>> x = paddle.randn([4, 3])
>>> x.stop_gradient = False
>>> y = (x * x).sin()
>>> dy_dxx = ppsci.autodiff.hessian(y, x, component=0)
>>> print(dy_dxx.shape)
[4, 1]
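As a further hedged sketch (under the same reading of the parameter table above), i and j select the two input columns that the second derivative is taken with respect to, so a mixed derivative such as d2 y[:, 0] / (d x[:, 0] d x[:, 1]) is obtained by passing both indices:

>>> import paddle
>>> import ppsci
>>> x = paddle.randn([4, 3])
>>> x.stop_gradient = False
>>> y = (x * x).sin()
>>> d2y0_dx0dx1 = ppsci.autodiff.hessian(y, x, component=0, i=0, j=1)  # mixed second derivative
>>> print(d2y0_dx0dx1.shape)
[4, 1]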
Source code in ppsci/autodiff/ad.py
clear()
Clear cached Jacobians and Hessians.
Examples:
>>> import paddle
>>> import ppsci
>>> x = paddle.randn([4, 3])
>>> x.stop_gradient = False
>>> y = (x * x).sin()
>>> dy_dxx = ppsci.autodiff.hessian(y, x, component=0)
>>> ppsci.autodiff.clear()
>>> print(ppsci.autodiff.hessian.Hs)
{}
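Because the cached derivatives belong to a single forward graph, one reasonable pattern (a sketch under that assumption, using a hypothetical toy network and loss) is to call clear() at the end of every optimization step, so that the next forward pass starts with an empty cache:

>>> import paddle
>>> import ppsci
>>> model = paddle.nn.Sequential(paddle.nn.Linear(2, 16), paddle.nn.Tanh(), paddle.nn.Linear(16, 1))
>>> opt = paddle.optimizer.Adam(parameters=model.parameters())
>>> for _ in range(3):
...     x = paddle.randn([8, 2])
...     x.stop_gradient = False
...     u = model(x)
...     # Laplacian-style residual built from cached second derivatives
...     res = ppsci.autodiff.hessian(u, x, i=0, j=0) + ppsci.autodiff.hessian(u, x, i=1, j=1)
...     loss = (res ** 2).mean()
...     loss.backward()
...     opt.step()
...     opt.clear_grad()
...     ppsci.autodiff.clear()  # drop cached Jacobians/Hessians before the next forward pass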