Arch (Network Model) Module¶
ppsci.arch ¶
Arch ¶
Bases: Layer
Base class for Network.
Source code in ppsci/arch/base.py
num_params: int property ¶
Return number of parameters within network.
Returns:
Name | Type | Description |
---|---|---|
int | int | Number of parameters. |
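Examples:
A short usage sketch (not part of the original docs); the property can be read off any concrete subclass, such as an MLP.
>>> import ppsci
>>> model = ppsci.arch.MLP(("x", "y"), ("u", "v"), 5, 128)
>>> print(isinstance(model.num_params, int))
True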
concat_to_tensor(data_dict, keys, axis=-1) staticmethod ¶
Concatenate tensors from dict in the order of given keys.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_dict | Dict[str, Tensor] | Dict containing tensors. | required |
keys | Tuple[str, ...] | Keys of the tensors to fetch. | required |
axis | int | Axis to concatenate along. Defaults to -1. | -1 |
Returns:
Type | Description |
---|---|
Tuple[Tensor, ...] | Tuple[paddle.Tensor, ...]: Concatenated tensor. |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.Arch()
>>> # fetch one tensor
>>> out = model.concat_to_tensor({'x':paddle.rand([64, 64, 1])}, ('x',))
>>> print(out.dtype, out.shape)
paddle.float32 [64, 64, 1]
>>> # fetch more tensors
>>> out = model.concat_to_tensor(
... {'x1':paddle.rand([64, 64, 1]), 'x2':paddle.rand([64, 64, 1])},
... ('x1', 'x2'),
... axis=2)
>>> print(out.dtype, out.shape)
paddle.float32 [64, 64, 2]
Source code in ppsci/arch/base.py
freeze() ¶
Freeze all parameters.
Examples:
>>> import ppsci
>>> model = ppsci.arch.Arch()
>>> # freeze all parameters and make model `eval`
>>> model.freeze()
>>> assert not model.training
>>> for p in model.parameters():
... assert p.stop_gradient
Source code in ppsci/arch/base.py
register_input_transform(transform) ¶
Register input transform.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
transform | Callable[[Dict[str, Tensor]], Dict[str, Tensor]] | Input transform of the network; receives a single tensor dict and returns a single tensor dict. | required |
Examples:
>>> import paddle
>>> import ppsci
>>> def transform_in(in_):
... x = in_["x"]
... # transform input
... x_ = 2.0 * x
... input_trans = {"2x": x_}
... return input_trans
>>> # `MLP` inherits from `Arch`
>>> model = ppsci.arch.MLP(
... input_keys=("2x",),
... output_keys=("y",),
... num_layers=5,
... hidden_size=32)
>>> model.register_input_transform(transform_in)
>>> out = model({"x":paddle.rand([64, 64, 1])})
>>> for k, v in out.items():
... print(f"{k} {v.dtype} {v.shape}")
y paddle.float32 [64, 64, 1]
Source code in ppsci/arch/base.py
register_output_transform(transform) ¶
Register output transform.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
transform | Callable[[Dict[str, Tensor], Dict[str, Tensor]], Dict[str, Tensor]] | Output transform of the network; receives two tensor dicts (raw input and raw output) and returns a single tensor dict (transformed output). | required |
Examples:
>>> import paddle
>>> import ppsci
>>> def transform_out(in_, out):
... x = in_["x"]
... y = out["y"]
... u = 2.0 * x * y
... output_trans = {"u": u}
... return output_trans
>>> # `MLP` inherits from `Arch`
>>> model = ppsci.arch.MLP(
... input_keys=("x",),
... output_keys=("y",),
... num_layers=5,
... hidden_size=32)
>>> model.register_output_transform(transform_out)
>>> out = model({"x":paddle.rand([64, 64, 1])})
>>> for k, v in out.items():
... print(f"{k} {v.dtype} {v.shape}")
u paddle.float32 [64, 64, 1]
Source code in ppsci/arch/base.py
split_to_dict(data_tensor, keys, axis=-1) staticmethod ¶
Split tensor and wrap into a dict by given keys.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_tensor | Tensor | Tensor to be split. | required |
keys | Tuple[str, ...] | Keys to map the split tensors to. | required |
axis | int | Axis to split along. Defaults to -1. | -1 |
Returns:
Type | Description |
---|---|
Dict[str, Tensor] | Dict[str, paddle.Tensor]: Dict containing the split tensors. |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.Arch()
>>> # split one tensor
>>> out = model.split_to_dict(paddle.rand([64, 64, 1]), ('x',))
>>> for k, v in out.items():
... print(f"{k} {v.dtype} {v.shape}")
x paddle.float32 [64, 64, 1]
>>> # split more tensors
>>> out = model.split_to_dict(paddle.rand([64, 64, 2]), ('x1', 'x2'), axis=2)
>>> for k, v in out.items():
... print(f"{k} {v.dtype} {v.shape}")
x1 paddle.float32 [64, 64, 1]
x2 paddle.float32 [64, 64, 1]
Source code in ppsci/arch/base.py
unfreeze() ¶
Unfreeze all parameters.
Examples:
>>> import ppsci
>>> model = ppsci.arch.Arch()
>>> # unfreeze all parameters and make model `train`
>>> model.unfreeze()
>>> assert model.training
>>> for p in model.parameters():
... assert not p.stop_gradient
Source code in ppsci/arch/base.py
AMGNet ¶
Bases: Layer
A multi-scale graph neural network model based on the Encoder-Process-Decoder structure, for flow field prediction.
https://doi.org/10.1080/09540091.2022.2131737
Code reference: https://github.com/baoshiaijhin/amgnet
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input", ). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("pred", ). | required |
input_dim | int | Number of input dimensions. | required |
output_dim | int | Number of output dimensions. | required |
latent_dim | int | Number of hidden (feature) dimensions. | required |
num_layers | int | Number of layer(s). | required |
message_passing_aggregator | Literal['sum'] | Message aggregator method in graph. Only "sum" is available now. | required |
message_passing_steps | int | Number of message passing steps in graph. | required |
speed | str | Whether to use the vanilla method or the fast method for graph_connectivity computation. | required |
Examples:
>>> import ppsci
>>> model = ppsci.arch.AMGNet(
... ("input", ), ("pred", ), 5, 3, 64, 2, "sum", 6, "norm",
... )
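The forward pass expects graph data built with pgl, which is involved to construct inline; as a minimal sketch (not part of the original docs), the model built above can still be inspected through standard paddle.nn.Layer APIs.
>>> import numpy as np
>>> # count trainable parameters of the model constructed above
>>> n_params = sum(int(np.prod(p.shape)) for p in model.parameters())
>>> print(n_params > 0)
True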
Source code in ppsci/arch/amgnet.py
MLP ¶
Bases: Arch
Multi-layer perceptron network.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("x", "y", "z"). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("u", "v", "w"). | required |
num_layers | int | Number of hidden layers. | required |
hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units. An integer for all layers, or a list of integers specifying each layer's size. | required |
activation | str | Name of activation function. Defaults to "tanh". | 'tanh' |
skip_connection | bool | Whether to use skip connection. Defaults to False. | False |
weight_norm | bool | Whether to apply weight norm on parameter(s). Defaults to False. | False |
input_dim | Optional[int] | Dimension of the input. Defaults to None. | None |
output_dim | Optional[int] | Dimension of the output. Defaults to None. | None |
periods | Optional[Dict[int, Tuple[float, bool]]] | Period of each input key; input in the given channel will be periodically embedded if specified. Each tuple is (period, trainable). Defaults to None. | None |
fourier | Optional[Dict[str, Union[float, int]]] | Random Fourier feature embedding, e.g. {'dim': 256, 'scale': 1.0}. Defaults to None. | None |
random_weight | Optional[Dict[str, float]] | Mean and std of random weight factorization layer, e.g. {"mean": 0.5, "std": 0.1}. Defaults to None. | None |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.MLP(
... input_keys=("x", "y"),
... output_keys=("u", "v"),
... num_layers=5,
... hidden_size=128
... )
>>> input_dict = {"x": paddle.rand([64, 1]),
... "y": paddle.rand([64, 1])}
>>> output_dict = model(input_dict)
>>> print(output_dict["u"].shape)
[64, 1]
>>> print(output_dict["v"].shape)
[64, 1]
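The embedding options follow the formats documented above. Below is a minimal sketch (not part of the original docs) enabling random Fourier features, assuming the per-key output shape is unchanged by the embedding:
>>> model = ppsci.arch.MLP(
...     input_keys=("x", "y"),
...     output_keys=("u",),
...     num_layers=5,
...     hidden_size=128,
...     fourier={"dim": 256, "scale": 1.0},
... )
>>> output_dict = model({"x": paddle.rand([64, 1]), "y": paddle.rand([64, 1])})
>>> print(output_dict["u"].shape)
[64, 1]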
Source code in ppsci/arch/mlp.py
ModifiedMLP ¶
Bases: Arch
Modified multi-layer perceptron network.
Understanding and mitigating gradient pathologies in physics-informed neural networks. https://arxiv.org/pdf/2001.04536.pdf.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("x", "y", "z"). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("u", "v", "w"). | required |
num_layers | int | Number of hidden layers. | required |
hidden_size | int | Number of hidden units, an integer for all layers. | required |
activation | str | Name of activation function. Defaults to "tanh". | 'tanh' |
skip_connection | bool | Whether to use skip connection. Defaults to False. | False |
weight_norm | bool | Whether to apply weight norm on parameter(s). Defaults to False. | False |
input_dim | Optional[int] | Dimension of the input. Defaults to None. | None |
output_dim | Optional[int] | Dimension of the output. Defaults to None. | None |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.ModifiedMLP(
... input_keys=("x", "y"),
... output_keys=("u", "v"),
... num_layers=5,
... hidden_size=128
... )
>>> input_dict = {"x": paddle.rand([64, 1]),
... "y": paddle.rand([64, 1])}
>>> output_dict = model(input_dict)
>>> print(output_dict["u"].shape)
[64, 1]
>>> print(output_dict["v"].shape)
[64, 1]
Source code in ppsci/arch/mlp.py
PirateNet ¶
Bases: Arch
PirateNet.
PIRATENETS: PHYSICS-INFORMED DEEP LEARNING WITH RESIDUAL ADAPTIVE NETWORKS
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("x", "y", "z"). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("u", "v", "w"). | required |
num_blocks | int | Number of PirateBlocks. | required |
hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units. An integer for all layers, or a list of integers specifying each layer's size. | required |
activation | str | Name of activation function. Defaults to "tanh". | 'tanh' |
weight_norm | bool | Whether to apply weight norm on parameter(s). Defaults to False. | False |
input_dim | Optional[int] | Dimension of the input. Defaults to None. | None |
output_dim | Optional[int] | Dimension of the output. Defaults to None. | None |
periods | Optional[Dict[int, Tuple[float, bool]]] | Period of each input key; input in the given channel will be periodically embedded if specified. Each tuple is (period, trainable). Defaults to None. | None |
fourier | Optional[Dict[str, Union[float, int]]] | Random Fourier feature embedding, e.g. {'dim': 256, 'scale': 1.0}. Defaults to None. | None |
random_weight | Optional[Dict[str, float]] | Mean and std of random weight factorization layer, e.g. {"mean": 0.5, "std": 0.1}. Defaults to None. | None |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.PirateNet(
... input_keys=("x", "y"),
... output_keys=("u", "v"),
... num_blocks=3,
... hidden_size=256,
... fourier={'dim': 256, 'scale': 1.0},
... )
>>> input_dict = {"x": paddle.rand([64, 1]),
... "y": paddle.rand([64, 1])}
>>> output_dict = model(input_dict)
>>> print(output_dict["u"].shape)
[64, 1]
>>> print(output_dict["v"].shape)
[64, 1]
Source code in ppsci/arch/mlp.py
DeepONet ¶
Bases: Arch
Deep operator network.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
u_key | str | Name of function data for input function u(x). | required |
y_key | str | Name of location data for the output function G(u). | required |
G_key | str | Output name of predicted G(u)(y). | required |
num_loc | int | Number of sampled u(x). | required |
num_features | int | Number of features extracted from u(x), same for y. | required |
branch_num_layers | int | Number of hidden layers of branch net. | required |
trunk_num_layers | int | Number of hidden layers of trunk net. | required |
branch_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of branch net. An integer for all layers, or a list of integers specifying each layer's size. | required |
trunk_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of trunk net. An integer for all layers, or a list of integers specifying each layer's size. | required |
branch_skip_connection | bool | Whether to use skip connection for branch net. Defaults to False. | False |
trunk_skip_connection | bool | Whether to use skip connection for trunk net. Defaults to False. | False |
branch_activation | str | Name of activation function. Defaults to "tanh". | 'tanh' |
trunk_activation | str | Name of activation function. Defaults to "tanh". | 'tanh' |
branch_weight_norm | bool | Whether to apply weight norm on parameter(s) for branch net. Defaults to False. | False |
trunk_weight_norm | bool | Whether to apply weight norm on parameter(s) for trunk net. Defaults to False. | False |
use_bias | bool | Whether to add bias on predicted G(u)(y). Defaults to True. | True |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.DeepONet(
... "u", "y", "G",
... 100, 40,
... 1, 1,
... 40, 40,
... branch_activation="relu", trunk_activation="relu",
... use_bias=True,
... )
>>> input_dict = {"u": paddle.rand([200, 100]),
... "y": paddle.rand([200, 1])}
>>> output_dict = model(input_dict)
>>> print(output_dict["G"].shape)
[200, 1]
Source code in ppsci/arch/deeponet.py
DeepPhyLSTM ¶
Bases: Arch
DeepPhyLSTM model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_size | int | The input size. | required |
output_size | int | The output size. | required |
hidden_size | int | The hidden size. Defaults to 100. | 100 |
model_type | int | The model type, 2 or 3: 2 indicates two sub-models, 3 indicates three sub-models. Defaults to 2. | 2 |
Examples:
>>> import paddle
>>> import ppsci
>>> # model_type is `2`
>>> model = ppsci.arch.DeepPhyLSTM(
... input_size=16,
... output_size=1,
... hidden_size=100,
... model_type=2)
>>> out = model(
... {"ag":paddle.rand([64, 16, 16]),
... "ag_c":paddle.rand([64, 16, 16]),
... "phi":paddle.rand([1, 16, 16])})
>>> for k, v in out.items():
... print(f"{k} {v.dtype} {v.shape}")
eta_pred paddle.float32 [64, 16, 1]
eta_dot_pred paddle.float32 [64, 16, 1]
g_pred paddle.float32 [64, 16, 1]
eta_t_pred_c paddle.float32 [64, 16, 1]
eta_dot_pred_c paddle.float32 [64, 16, 1]
lift_pred_c paddle.float32 [64, 16, 1]
>>> # model_type is `3`
>>> model = ppsci.arch.DeepPhyLSTM(
... input_size=16,
... output_size=1,
... hidden_size=100,
... model_type=3)
>>> out = model(
... {"ag":paddle.rand([64, 16, 1]),
... "ag_c":paddle.rand([64, 16, 1]),
... "phi":paddle.rand([1, 16, 16])})
>>> for k, v in out.items():
... print(f"{k} {v.dtype} {v.shape}")
eta_pred paddle.float32 [64, 16, 1]
eta_dot_pred paddle.float32 [64, 16, 1]
g_pred paddle.float32 [64, 16, 1]
eta_t_pred_c paddle.float32 [64, 16, 1]
eta_dot_pred_c paddle.float32 [64, 16, 1]
lift_pred_c paddle.float32 [64, 16, 1]
g_t_pred_c paddle.float32 [64, 16, 1]
g_dot_pred_c paddle.float32 [64, 16, 1]
Source code in ppsci/arch/phylstm.py
LorenzEmbedding ¶
Bases: Arch
Embedding Koopman model for the Lorenz ODE system.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Input keys, such as ("states",). | required |
output_keys | Tuple[str, ...] | Output keys, such as ("pred_states", "recover_states"). | required |
mean | Optional[Tuple[float, ...]] | Mean of training dataset. Defaults to None. | None |
std | Optional[Tuple[float, ...]] | Standard deviation of training dataset. Defaults to None. | None |
input_size | int | Size of input data. Defaults to 3. | 3 |
hidden_size | int | Number of hidden units. Defaults to 500. | 500 |
embed_size | int | Size of the embedding. Defaults to 32. | 32 |
drop | float | Dropout probability. Defaults to 0.0. | 0.0 |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.LorenzEmbedding(
... input_keys=("x", "y"),
... output_keys=("u", "v"),
... input_size=3,
... hidden_size=500,
... embed_size=32,
... drop=0.0,
... mean=None,
... std=None,
... )
>>> x_shape = [8, 3, 2]
>>> y_shape = [8, 3, 1]
>>> input_dict = {"x": paddle.rand(x_shape),
... "y": paddle.rand(y_shape)}
>>> output_dict = model(input_dict)
>>> print(output_dict["u"].shape)
[8, 2, 3]
>>> print(output_dict["v"].shape)
[8, 3, 3]
Source code in ppsci/arch/embedding_koopman.py
RosslerEmbedding ¶
Bases: LorenzEmbedding
Embedding Koopman model for the Rossler ODE system.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Input keys, such as ("states",). | required |
output_keys | Tuple[str, ...] | Output keys, such as ("pred_states", "recover_states"). | required |
mean | Optional[Tuple[float, ...]] | Mean of training dataset. Defaults to None. | None |
std | Optional[Tuple[float, ...]] | Standard deviation of training dataset. Defaults to None. | None |
input_size | int | Size of input data. Defaults to 3. | 3 |
hidden_size | int | Number of hidden units. Defaults to 500. | 500 |
embed_size | int | Size of the embedding. Defaults to 32. | 32 |
drop | float | Dropout probability. Defaults to 0.0. | 0.0 |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.RosslerEmbedding(
... input_keys=("x", "y"),
... output_keys=("u", "v"),
... input_size=3,
... hidden_size=500,
... embed_size=32,
... drop=0.0,
... mean=None,
... std=None,
... )
>>> x_shape = [8, 3, 2]
>>> y_shape = [8, 3, 1]
>>> input_dict = {"x": paddle.rand(x_shape),
... "y": paddle.rand(y_shape)}
>>> output_dict = model(input_dict)
>>> print(output_dict["u"].shape)
[8, 2, 3]
>>> print(output_dict["v"].shape)
[8, 3, 3]
Source code in ppsci/arch/embedding_koopman.py
CylinderEmbedding ¶
Bases: Arch
Embedding Koopman model for the Cylinder system.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Input keys, such as ("states", "visc"). | required |
output_keys | Tuple[str, ...] | Output keys, such as ("pred_states", "recover_states"). | required |
mean | Optional[Tuple[float, ...]] | Mean of training dataset. Defaults to None. | None |
std | Optional[Tuple[float, ...]] | Standard deviation of training dataset. Defaults to None. | None |
embed_size | int | Size of the embedding. Defaults to 128. | 128 |
encoder_channels | Optional[Tuple[int, ...]] | Number of channels in encoder network. Defaults to None. | None |
decoder_channels | Optional[Tuple[int, ...]] | Number of channels in decoder network. Defaults to None. | None |
drop | float | Dropout probability. Defaults to 0.0. | 0.0 |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.CylinderEmbedding(("states", "visc"), ("pred_states", "recover_states"))
>>> states_shape = [32, 10, 3, 64, 128]
>>> visc_shape = [32, 1]
>>> input_dict = {"states" : paddle.rand(states_shape),
... "visc" : paddle.rand(visc_shape)}
>>> out_dict = model(input_dict)
>>> print(out_dict["pred_states"].shape)
[32, 9, 3, 64, 128]
>>> print(out_dict["recover_states"].shape)
[32, 10, 3, 64, 128]
Source code in ppsci/arch/embedding_koopman.py
Generator ¶
Bases: Arch
Generator net of GAN. Note: this net uses a variant of ResBlock that is unique to the "tempoGAN" example and is not an open-source network.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input1", "input2"). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output1", "output2"). | required |
in_channel | int | Number of input channels of the first conv layer. | required |
out_channels_tuple | Tuple[Tuple[int, ...], ...] | Number of output channels of all conv layers, such as [[out_res0_conv0, out_res0_conv1], [out_res1_conv0, out_res1_conv1]]. | required |
kernel_sizes_tuple | Tuple[Tuple[int, ...], ...] | Kernel size(s) of all conv layers, such as [[kernel_size_res0_conv0, kernel_size_res0_conv1], [kernel_size_res1_conv0, kernel_size_res1_conv1]]. | required |
strides_tuple | Tuple[Tuple[int, ...], ...] | Stride(s) of all conv layers, such as [[stride_res0_conv0, stride_res0_conv1], [stride_res1_conv0, stride_res1_conv1]]. | required |
use_bns_tuple | Tuple[Tuple[bool, ...], ...] | Whether to use the batch_norm layer after each conv layer. | required |
acts_tuple | Tuple[Tuple[str, ...], ...] | Whether to use the activation layer after each conv layer, and if so, which activation to use, such as [[act_res0_conv0, act_res0_conv1], [act_res1_conv0, act_res1_conv1]]. | required |
Examples:
>>> import paddle
>>> import ppsci
>>> in_channel = 1
>>> rb_channel0 = (2, 8, 8)
>>> rb_channel1 = (128, 128, 128)
>>> rb_channel2 = (32, 8, 8)
>>> rb_channel3 = (2, 1, 1)
>>> out_channels_tuple = (rb_channel0, rb_channel1, rb_channel2, rb_channel3)
>>> kernel_sizes_tuple = (((5, 5), ) * 2 + ((1, 1), ), ) * 4
>>> strides_tuple = ((1, 1, 1), ) * 4
>>> use_bns_tuple = ((True, True, True), ) * 3 + ((False, False, False), )
>>> acts_tuple = (("relu", None, None), ) * 4
>>> model = ppsci.arch.Generator(("in",), ("out",), in_channel, out_channels_tuple, kernel_sizes_tuple, strides_tuple, use_bns_tuple, acts_tuple)
>>> batch_size = 4
>>> height = 64
>>> width = 64
>>> input_data = paddle.randn([batch_size, in_channel, height, width])
>>> input_dict = {'in': input_data}
>>> output_data = model(input_dict)
>>> print(output_data['out'].shape)
[4, 1, 64, 64]
Source code in ppsci/arch/gan.py
Discriminator ¶
Bases: Arch
Discriminator net of GAN.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input1", "input2"). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output1", "output2"). | required |
in_channel | int | Number of input channels of the first conv layer. | required |
out_channels | Tuple[int, ...] | Number of output channels of all conv layers, such as (out_conv0, out_conv1, out_conv2). | required |
fc_channel | int | Number of input features of the linear layer. The number of output features is set to 1 in this net to construct a fully_connected layer. | required |
kernel_sizes | Tuple[int, ...] | Kernel size(s) of all conv layers, such as (kernel_size_conv0, kernel_size_conv1, kernel_size_conv2). | required |
strides | Tuple[int, ...] | Stride(s) of all conv layers, such as (stride_conv0, stride_conv1, stride_conv2). | required |
use_bns | Tuple[bool, ...] | Whether to use the batch_norm layer after each conv layer. | required |
acts | Tuple[str, ...] | Whether to use the activation layer after each conv layer, and if so, which activation to use, such as (act_conv0, act_conv1, act_conv2). | required |
Examples:
>>> import paddle
>>> import ppsci
>>> in_channel = 2
>>> in_channel_tempo = 3
>>> out_channels = (32, 64, 128, 256)
>>> fc_channel = 65536
>>> kernel_sizes = ((4, 4), (4, 4), (4, 4), (4, 4))
>>> strides = (2, 2, 2, 1)
>>> use_bns = (False, True, True, True)
>>> acts = ("leaky_relu", "leaky_relu", "leaky_relu", "leaky_relu", None)
>>> output_keys_disc = ("out_1", "out_2", "out_3", "out_4", "out_5", "out_6", "out_7", "out_8", "out_9", "out_10")
>>> model = ppsci.arch.Discriminator(("in_1","in_2"), output_keys_disc, in_channel, out_channels, fc_channel, kernel_sizes, strides, use_bns, acts)
>>> input_data = [paddle.to_tensor(paddle.randn([1, in_channel, 128, 128])),paddle.to_tensor(paddle.randn([1, in_channel, 128, 128]))]
>>> input_dict = {"in_1": input_data[0],"in_2": input_data[1]}
>>> out_dict = model(input_dict)
>>> for k, v in out_dict.items():
... print(k, v.shape)
out_1 [1, 32, 64, 64]
out_2 [1, 64, 32, 32]
out_3 [1, 128, 16, 16]
out_4 [1, 256, 16, 16]
out_5 [1, 1]
out_6 [1, 32, 64, 64]
out_7 [1, 64, 32, 32]
out_8 [1, 128, 16, 16]
out_9 [1, 256, 16, 16]
out_10 [1, 1]
Source code in ppsci/arch/gan.py
split_to_dict(data_list, keys) staticmethod ¶
Overwrite of the split_to_dict() method of class base.Arch.
It is overwritten because no concat_to_tensor() is called in the "tempoGAN" example: the input there is not in a regular format, but looks like { "input1": paddle.concat([in1, in2], axis=1), "input2": paddle.concat([in1, in3], axis=1) }.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
data_list | List[Tensor] | The data to be split. It should be a list of tensor(s), not a paddle.Tensor. | required |
keys | Tuple[str, ...] | Keys of outputs. | required |
Returns:
Type | Description |
---|---|
Dict[str, Tensor] | Dict[str, paddle.Tensor]: Dict with split data. |
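Examples:
A minimal sketch (not part of the original docs), assuming the method is exposed on the GAN models in ppsci/arch/gan.py (e.g. Generator) and simply pairs each key with the tensor at the same position in data_list:
>>> import paddle
>>> import ppsci
>>> data_list = [paddle.rand([1, 2, 128, 128]), paddle.rand([1, 3, 128, 128])]
>>> out = ppsci.arch.Generator.split_to_dict(data_list, ("input1", "input2"))
>>> print(out["input1"].shape, out["input2"].shape)
[1, 2, 128, 128] [1, 3, 128, 128]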
Source code in ppsci/arch/gan.py
PhysformerGPT2 ¶
Bases: Arch
Transformer decoder model for modeling physics.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Input keys, such as ("embeds",). | required |
output_keys | Tuple[str, ...] | Output keys, such as ("pred_embeds",). | required |
num_layers | int | Number of transformer layers. | required |
num_ctx | int | Context length of block. | required |
embed_size | int | The embedding size. | required |
num_heads | int | The number of heads in multi-head attention. | required |
embd_pdrop | float | The dropout probability used on embedding features. Defaults to 0.0. | 0.0 |
attn_pdrop | float | The dropout probability used on attention weights. Defaults to 0.0. | 0.0 |
resid_pdrop | float | The dropout probability used on block outputs. Defaults to 0.0. | 0.0 |
initializer_range | float | Initializer range of linear layer. Defaults to 0.05. | 0.05 |
embedding_model | Optional[Arch] | Embedding model. If set, it maps the input data to the embedding space and the output data back to the physical space. Defaults to None. | None |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.PhysformerGPT2(("embeds", ), ("pred_embeds", ), 6, 16, 128, 4)
>>> data = paddle.to_tensor(paddle.randn([10, 16, 128]))
>>> inputs = {"embeds": data}
>>> outputs = model(inputs)
>>> print(outputs["pred_embeds"].shape)
[10, 16, 128]
Source code in ppsci/arch/physx_transformer.py
ModelList ¶
Bases: Arch
ModelList layer, which wraps more than one model sharing the same inputs.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
model_list | Tuple[Arch, ...] | Model(s) nested in tuple. | required |
Examples:
>>> import paddle
>>> import ppsci
>>> model1 = ppsci.arch.MLP(("x", "y"), ("u", "v"), 10, 128)
>>> model2 = ppsci.arch.MLP(("x", "y"), ("w", "p"), 5, 128)
>>> model = ppsci.arch.ModelList((model1, model2))
>>> input_dict = {"x": paddle.rand([64, 64, 1]),"y": paddle.rand([64, 64, 1])}
>>> output_dict = model(input_dict)
>>> for k, v in output_dict.items():
... print(k, v.shape)
u [64, 64, 1]
v [64, 64, 1]
w [64, 64, 1]
p [64, 64, 1]
Source code in ppsci/arch/model_list.py
AFNONet ¶
Bases: Arch
Adaptive Fourier Neural Network.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
img_size | Tuple[int, ...] | Image size. Defaults to (720, 1440). | (720, 1440) |
patch_size | Tuple[int, ...] | Patch size. Defaults to (8, 8). | (8, 8) |
in_channels | int | The input tensor channels. Defaults to 20. | 20 |
out_channels | int | The output tensor channels. Defaults to 20. | 20 |
embed_dim | int | The embedding dimension for PatchEmbed. Defaults to 768. | 768 |
depth | int | The transformer depth. Defaults to 12. | 12 |
mlp_ratio | float | The ratio used in MLP. Defaults to 4.0. | 4.0 |
drop_rate | float | The drop ratio used in MLP. Defaults to 0.0. | 0.0 |
drop_path_rate | float | The drop ratio used in DropPath. Defaults to 0.0. | 0.0 |
num_blocks | int | Number of blocks. Defaults to 8. | 8 |
sparsity_threshold | float | The threshold value for softshrink. Defaults to 0.01. | 0.01 |
hard_thresholding_fraction | float | The threshold value for keep mode. Defaults to 1.0. | 1.0 |
num_timestamps | int | Number of timestamps. Defaults to 1. | 1 |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.AFNONet(("input", ), ("output", ))
>>> input_data = {"input": paddle.randn([1, 20, 720, 1440])}
>>> output_data = model(input_data)
>>> for k, v in output_data.items():
... print(k, v.shape)
output [1, 20, 720, 1440]
Source code in ppsci/arch/afno.py
PrecipNet ¶
Bases: Arch
Precipitation Network.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
wind_model | Arch | Wind model. | required |
img_size | Tuple[int, ...] | Image size. Defaults to (720, 1440). | (720, 1440) |
patch_size | Tuple[int, ...] | Patch size. Defaults to (8, 8). | (8, 8) |
in_channels | int | The input tensor channels. Defaults to 20. | 20 |
out_channels | int | The output tensor channels. Defaults to 1. | 1 |
embed_dim | int | The embedding dimension for PatchEmbed. Defaults to 768. | 768 |
depth | int | The transformer depth. Defaults to 12. | 12 |
mlp_ratio | float | The ratio used in MLP. Defaults to 4.0. | 4.0 |
drop_rate | float | The drop ratio used in MLP. Defaults to 0.0. | 0.0 |
drop_path_rate | float | The drop ratio used in DropPath. Defaults to 0.0. | 0.0 |
num_blocks | int | Number of blocks. Defaults to 8. | 8 |
sparsity_threshold | float | The threshold value for softshrink. Defaults to 0.01. | 0.01 |
hard_thresholding_fraction | float | The threshold value for keep mode. Defaults to 1.0. | 1.0 |
num_timestamps | int | Number of timestamps. Defaults to 1. | 1 |
Examples:
>>> import paddle
>>> import ppsci
>>> wind_model = ppsci.arch.AFNONet(("input", ), ("output", ))
>>> model = ppsci.arch.PrecipNet(("input", ), ("output", ), wind_model)
>>> data = paddle.randn([1, 20, 720, 1440])
>>> data_dict = {"input": data}
>>> output = model.forward(data_dict)
>>> print(output['output'].shape)
[1, 1, 720, 1440]
Source code in ppsci/arch/afno.py
PhyCRNet ¶
Bases: Arch
Physics-informed convolutional-recurrent neural networks.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_channels | int | The input channels. | required |
hidden_channels | Tuple[int, ...] | The hidden channels. | required |
input_kernel_size | Tuple[int, ...] | The input kernel size(s). | required |
input_stride | Tuple[int, ...] | The input stride(s). | required |
input_padding | Tuple[int, ...] | The input padding(s). | required |
dt | float | The dt parameter. | required |
num_layers | Tuple[int, ...] | The number of layers. | required |
upscale_factor | int | The upscale factor. | required |
step | int | The step(s). Defaults to 1. | 1 |
effective_step | Tuple[int, ...] | The effective step. Defaults to (1, ). | (1, ) |
Examples:
>>> import ppsci
>>> model = ppsci.arch.PhyCRNet(
... input_channels=2,
... hidden_channels=[8, 32, 128, 128],
... input_kernel_size=[4, 4, 4, 3],
... input_stride=[2, 2, 2, 1],
... input_padding=[1, 1, 1, 1],
... dt=0.002,
... num_layers=[3, 1],
... upscale_factor=8
... )
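The forward pass needs recurrent initial states and time-series input that this example does not construct; as a minimal sketch (not part of the original docs), the utilities inherited from Arch can still be exercised on the model built above.
>>> print(isinstance(model.num_params, int))
True
>>> model.freeze()
>>> assert not model.training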
Source code in ppsci/arch/phycrnet.py
UNetEx ¶
Bases: Arch
U-Net Extension for CFD.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_key | str | Name of function data for input. | required |
output_key | str | Name of function data for output. | required |
in_channel | int | Number of channels of input. | required |
out_channel | int | Number of channels of output. | required |
kernel_size | int | Size of kernel of convolution layer. Defaults to 3. | 3 |
filters | Tuple[int, ...] | Number of filters. Defaults to (16, 32, 64). | (16, 32, 64) |
layers | int | Number of encoders or decoders. Defaults to 3. | 3 |
weight_norm | bool | Whether to use weight normalization layers. Defaults to True. | True |
batch_norm | bool | Whether to add batch normalization layers. Defaults to True. | True |
activation | Type[Layer] | Activation function class. Defaults to nn.ReLU. | ReLU |
final_activation | Optional[Type[Layer]] | Final activation function class. Defaults to None. | None |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.UNetEx(
... input_key="input",
... output_key="output",
... in_channel=3,
... out_channel=3,
... kernel_size=5,
... filters=(4, 4, 4, 4),
... layers=3,
... weight_norm=False,
... batch_norm=False,
... activation=None,
... final_activation=None,
... )
>>> input_dict = {'input': paddle.rand([4, 3, 4, 4])}
>>> output_dict = model(input_dict)
>>> print(output_dict['output'].shape)
[4, 3, 4, 4]
Source code in ppsci/arch/unetex.py
USCNN ¶
Bases: Arch
Physics-informed convolutional neural networks.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("coords", ). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("outputV", ). | required |
hidden_size | Union[int, Tuple[int, ...]] | The hidden channels for the convolutional layers. | required |
h | float | The spatial step. | required |
nx | int | The number of grid points along the x-axis. | required |
ny | int | The number of grid points along the y-axis. | required |
nvar_in | int | Number of input channels. Defaults to 1. | 1 |
nvar_out | int | Number of output channels. Defaults to 1. | 1 |
pad_singleside | int | Padding for the hard boundary constraint. Defaults to 1. | 1 |
k | int | Kernel size. Defaults to 5. | 5 |
s | int | Stride. Defaults to 1. | 1 |
p | int | Padding. Defaults to 2. | 2 |
Examples:
>>> import ppsci
>>> model = ppsci.arch.USCNN(
... ["coords"],
... ["outputV"],
... [16, 32, 16],
... h=0.01,
... ny=19,
... nx=84,
... nvar_in=2,
... nvar_out=1,
... pad_singleside=1,
... )
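A hypothetical forward-pass sketch (not part of the original docs); the coords tensor layout [batch, nvar_in, ny, nx] is an assumption, and the output dict is keyed by output_keys as with other Arch subclasses.
>>> import paddle
>>> coords = paddle.rand([1, 2, 19, 84])  # assumed layout: [batch, nvar_in, ny, nx]
>>> out = model({"coords": coords})
>>> print(sorted(out.keys()))
['outputV']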
Source code in ppsci/arch/uscnn.py
NowcastNet ¶
Bases: Arch
The NowcastNet model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
input_length | int | Input length. Defaults to 9. | 9 |
total_length | int | Total length. Defaults to 29. | 29 |
image_height | int | Image height. Defaults to 512. | 512 |
image_width | int | Image width. Defaults to 512. | 512 |
image_ch | int | Image channel. Defaults to 2. | 2 |
ngf | int | Noise Projector input length. Defaults to 32. | 32 |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.NowcastNet(("input", ), ("output", ))
>>> input_data = paddle.rand([1, 9, 512, 512, 2])
>>> input_dict = {"input": input_data}
>>> output_dict = model(input_dict)
>>> print(output_dict["output"].shape)
[1, 20, 512, 512, 1]
Source code in ppsci/arch/nowcastnet.py
HEDeepONets ¶
Bases: Arch
Physics-informed deep operator networks.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
heat_input_keys | Tuple[str, ...] | Name of input data for heat boundary. | required |
cold_input_keys | Tuple[str, ...] | Name of input data for cold boundary. | required |
trunk_input_keys | Tuple[str, ...] | Name of input data for trunk net. | required |
output_keys | Tuple[str, ...] | Output name of predicted temperature. | required |
heat_num_loc | int | Number of sampled input data for heat boundary. | required |
cold_num_loc | int | Number of sampled input data for cold boundary. | required |
num_features | int | Number of features extracted from heat boundary, same for cold boundary and trunk net. | required |
branch_num_layers | int | Number of hidden layers of branch net. | required |
trunk_num_layers | int | Number of hidden layers of trunk net. | required |
branch_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of branch net. An integer for all layers, or a list of integers specifying each layer's size. | required |
trunk_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of trunk net. An integer for all layers, or a list of integers specifying each layer's size. | required |
branch_skip_connection | bool | Whether to use skip connection for branch net. Defaults to False. | False |
trunk_skip_connection | bool | Whether to use skip connection for trunk net. Defaults to False. | False |
branch_activation | str | Name of activation function for branch net. Defaults to "tanh". | 'tanh' |
trunk_activation | str | Name of activation function for trunk net. Defaults to "tanh". | 'tanh' |
branch_weight_norm | bool | Whether to apply weight norm on parameter(s) for branch net. Defaults to False. | False |
trunk_weight_norm | bool | Whether to apply weight norm on parameter(s) for trunk net. Defaults to False. | False |
use_bias | bool | Whether to add bias on predicted G(u)(y). Defaults to True. | True |
Examples:
>>> import ppsci
>>> model = ppsci.arch.HEDeepONets(
... ('qm_h',),
... ('qm_c',),
... ("x",'t'),
... ("T_h",'T_c','T_w'),
... 1,
... 1,
... 100,
... 9,
... 6,
... 256,
... 128,
... branch_activation="swish",
... trunk_activation="swish",
... use_bias=True,
... )
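A hypothetical forward-pass sketch (not part of the original docs); the per-key shapes are assumptions inferred from the key and size arguments above.
>>> import paddle
>>> out = model({
...     "qm_h": paddle.rand([16, 1]),  # heat branch input, heat_num_loc = 1 (assumed)
...     "qm_c": paddle.rand([16, 1]),  # cold branch input, cold_num_loc = 1 (assumed)
...     "x": paddle.rand([16, 1]),
...     "t": paddle.rand([16, 1]),
... })
>>> print(sorted(out.keys()))
['T_c', 'T_h', 'T_w']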
Source code in ppsci/arch/he_deeponets.py
DGMR ¶
Bases: Arch
Deep Generative Model of Radar. Nowcasting GAN is an attempt to recreate DeepMind's Skillful Nowcasting GAN from https://arxiv.org/abs/2104.00954, but slightly modified for multiple satellite channels.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
forecast_steps | int | Number of steps to predict in the future. | 18 |
input_channels | int | Number of input channels per image. | 1 |
gen_lr | float | Learning rate for the generator. | 5e-05 |
disc_lr | float | Learning rate for the discriminators, shared for both temporal and spatial discriminator. | 0.0002 |
conv_type | str | Type of 2d convolution to use; see satflow/models/utils.py for options. | 'standard' |
beta1 | float | Beta1 for Adam optimizer. | 0.0 |
beta2 | float | Beta2 for Adam optimizer. | 0.999 |
num_samples | int | Number of samples of the latent space to sample for training/validation. | 6 |
grid_lambda | float | Lambda for the grid regularization loss. | 20.0 |
output_shape | int | Shape of the output predictions; generally should be the same as the input shape. | 256 |
generation_steps | int | Number of generation steps to use in the forward pass. In the paper this is 6 and the best is chosen for the loss, but this uses huge amounts of GPU memory, so fewer may work better for training. | 6 |
context_channels | int | Number of output channels for the lowest block of the conditioning stack. | 384 |
latent_channels | int | Number of channels that the latent space should be reshaped to; this is the input dimension into the ConvGRU and also affects the number of channels for other linked inputs/outputs. | 768 |
Examples:
>>> import ppsci
>>> import paddle
>>> model = ppsci.arch.DGMR(("input", ), ("output", ))
>>> input_dict = {"input": paddle.randn((1, 4, 1, 256, 256))}
>>> output_dict = model(input_dict)
>>> print(output_dict["output"].shape)
[1, 18, 1, 256, 256]
Source code in ppsci/arch/dgmr.py
ChipDeepONets ¶
Bases: Arch
Multi-branch physics-informed deep operator neural network. The network consists of three branch networks (random heat source, boundary function, and boundary type) and a trunk network.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
branch_input_keys | Tuple[str, ...] | Name of input data for internal heat source on branch nets. | required |
BCtype_input_keys | Tuple[str, ...] | Name of input data for boundary types on branch nets. | required |
BC_input_keys | Tuple[str, ...] | Name of input data for boundary on branch nets. | required |
trunk_input_keys | Tuple[str, ...] | Name of input data for trunk net. | required |
output_keys | Tuple[str, ...] | Output name of predicted temperature. | required |
num_loc | int | Number of sampled input data for internal heat source. | required |
bctype_loc | int | Number of sampled input data for boundary types. | required |
BC_num_loc | int | Number of sampled input data for boundary. | required |
num_features | int | Number of features extracted from trunk net, same for all branch nets. | required |
branch_num_layers | int | Number of hidden layers of internal heat source on branch nets. | required |
BC_num_layers | int | Number of hidden layers of boundary on branch nets. | required |
trunk_num_layers | int | Number of hidden layers of trunk net. | required |
branch_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of internal heat source on branch nets. An integer for all layers, or a list of integers specifying each layer's size. | required |
BC_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of boundary on branch nets. An integer for all layers, or a list of integers specifying each layer's size. | required |
trunk_hidden_size | Union[int, Tuple[int, ...]] | Number of hidden units of trunk net. An integer for all layers, or a list of integers specifying each layer's size. | required |
branch_skip_connection | bool | Whether to use skip connection for internal heat source on branch net. Defaults to False. | False |
BC_skip_connection | bool | Whether to use skip connection for boundary on branch net. Defaults to False. | False |
trunk_skip_connection | bool | Whether to use skip connection for trunk net. Defaults to False. | False |
branch_activation | str | Name of activation function for internal heat source on branch net. Defaults to "tanh". | 'tanh' |
BC_activation | str | Name of activation function for boundary on branch net. Defaults to "tanh". | 'tanh' |
trunk_activation | str | Name of activation function for trunk net. Defaults to "tanh". | 'tanh' |
branch_weight_norm | bool | Whether to apply weight norm on parameter(s) for internal heat source on branch net. Defaults to False. | False |
BC_weight_norm | bool | Whether to apply weight norm on parameter(s) for boundary on branch net. Defaults to False. | False |
trunk_weight_norm | bool | Whether to apply weight norm on parameter(s) for trunk net. Defaults to False. | False |
use_bias | bool | Whether to add bias on predicted G(u)(y). Defaults to True. | True |
Examples:
>>> import ppsci
>>> model = ppsci.arch.ChipDeepONets(
... ('u',),
... ('bc',),
... ('bc_data',),
... ("x",'y'),
... ("T",),
... 324,
... 1,
... 76,
... 400,
... 9,
... 9,
... 6,
... 256,
... 256,
... 128,
... branch_activation="swish",
... BC_activation="swish",
... trunk_activation="swish",
... use_bias=True,
... )
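A hypothetical forward-pass sketch (not part of the original docs); the per-key shapes are assumptions inferred from the key and size arguments above.
>>> import paddle
>>> out = model({
...     "u": paddle.rand([8, 324]),       # heat-source branch, num_loc = 324 (assumed)
...     "bc": paddle.rand([8, 1]),        # boundary-type branch, bctype_loc = 1 (assumed)
...     "bc_data": paddle.rand([8, 76]),  # boundary branch, BC_num_loc = 76 (assumed)
...     "x": paddle.rand([8, 1]),
...     "y": paddle.rand([8, 1]),
... })
>>> print(sorted(out.keys()))
['T']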
Source code in ppsci/arch/chip_deeponets.py
AutoEncoder ¶
Bases: Arch
AutoEncoder is a class that represents an autoencoder neural network model.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | A tuple of input keys. | required |
output_keys | Tuple[str, ...] | A tuple of output keys. | required |
input_dim | int | The dimension of the input data. | required |
latent_dim | int | The dimension of the latent space. | required |
hidden_dim | int | The dimension of the hidden layer. | required |
Examples:
>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.AutoEncoder(
... input_keys=("input1",),
... output_keys=("mu", "log_sigma", "decoder_z",),
... input_dim=100,
... latent_dim=50,
... hidden_dim=200
... )
>>> input_dict = {"input1": paddle.rand([200, 100]),}
>>> output_dict = model(input_dict)
>>> print(output_dict["mu"].shape)
[200, 50]
>>> print(output_dict["log_sigma"].shape)
[200, 50]
>>> print(output_dict["decoder_z"].shape)
[200, 100]
Source code in ppsci/arch/vae.py
CuboidTransformer ¶
Bases: Arch
Cuboid Transformer for spatiotemporal forecasting.
We adopt the non-autoregressive encoder-decoder architecture. The decoder takes the multi-scale memory output from the encoder.
The initial downsampling / upsampling layers are Downsampling: [K x Conv2D --> PatchMerge] and Upsampling: [Nearest-interpolation-based Upsample --> K x Conv2D].
Data flow: x --> downsample (optional) --> (+pos_embed) --> enc --> mem_l; initial_z (+pos_embed) --> FC; y <-- upsample (optional) <-- dec, where dec takes the FC output and attends to the multi-scale memory mem_l.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
input_shape | Tuple[int, ...] | The shape of the input data. | required |
target_shape | Tuple[int, ...] | The shape of the target data. | required |
base_units | int | The base units. Defaults to 128. | 128 |
block_units | int | The block units. Defaults to None. | None |
scale_alpha | float | The channels are scaled up according to round_to(base_units * max(downsample_scale) ** scale_alpha, 4). Defaults to 1.0. | 1.0 |
num_heads | int | The number of attention heads. Defaults to 4. | 4 |
attn_drop | float | The attention dropout rate. Defaults to 0.0. | 0.0 |
proj_drop | float | The projection dropout rate. Defaults to 0.0. | 0.0 |
ffn_drop | float | The FFN dropout rate. Defaults to 0.0. | 0.0 |
downsample | int | The downsampling rate. Defaults to 2. | 2 |
downsample_type | str | The type of downsampling. Defaults to "patch_merge". | 'patch_merge' |
upsample_type | str | The type of upsampling. Defaults to "upsample". | 'upsample' |
upsample_kernel_size | int | The kernel size of upsampling. Defaults to 3. | 3 |
enc_depth | list | The depth of the encoder. Defaults to [4, 4, 4]. | [4, 4, 4] |
enc_attn_patterns | str | The attention pattern of the encoder. Defaults to None. | None |
enc_cuboid_size | list | The cuboid size of the encoder. Defaults to [(4, 4, 4), (4, 4, 4)]. | [(4, 4, 4), (4, 4, 4)] |
enc_cuboid_strategy | list | The cuboid strategy of the encoder. Defaults to [("l", "l", "l"), ("d", "d", "d")]. | [('l', 'l', 'l'), ('d', 'd', 'd')] |
enc_shift_size | list | The shift size of the encoder. Defaults to [(0, 0, 0), (0, 0, 0)]. | [(0, 0, 0), (0, 0, 0)] |
enc_use_inter_ffn | bool | Whether to use intermediate FFN in the encoder. Defaults to True. | True |
dec_depth | list | The depth of the decoder. Defaults to [2, 2]. | [2, 2] |
dec_cross_start | int | The layer at which decoder cross-attention starts. Defaults to 0. | 0 |
dec_self_attn_patterns | str | The self-attention patterns of the decoder. Defaults to None. | None |
dec_self_cuboid_size | list | The self-attention cuboid size of the decoder. Defaults to [(4, 4, 4), (4, 4, 4)]. | [(4, 4, 4), (4, 4, 4)] |
dec_self_cuboid_strategy | list | The self-attention cuboid strategy of the decoder. Defaults to [("l", "l", "l"), ("d", "d", "d")]. | [('l', 'l', 'l'), ('d', 'd', 'd')] |
dec_self_shift_size | list | The self-attention shift size of the decoder. Defaults to [(1, 1, 1), (0, 0, 0)]. | [(1, 1, 1), (0, 0, 0)] |
dec_cross_attn_patterns | str | The cross-attention patterns of the decoder. Defaults to None. | None |
dec_cross_cuboid_hw | list | The cross-attention cuboid (height, width) of the decoder. Defaults to [(4, 4), (4, 4)]. | [(4, 4), (4, 4)] |
dec_cross_cuboid_strategy | list | The cross-attention cuboid strategy of the decoder. Defaults to [("l", "l", "l"), ("d", "l", "l")]. | [('l', 'l', 'l'), ('d', 'l', 'l')] |
dec_cross_shift_hw | list | The cross-attention shift (height, width) of the decoder. Defaults to [(0, 0), (0, 0)]. | [(0, 0), (0, 0)] |
dec_cross_n_temporal | list | The number of temporal frames used by decoder cross-attention. Defaults to [1, 2]. | [1, 2] |
dec_cross_last_n_frames | int | The last n frames used by decoder cross-attention. Defaults to None. | None |
dec_use_inter_ffn | bool | Whether to use intermediate FFN in the decoder. Defaults to True. | True |
dec_hierarchical_pos_embed | bool | Whether to use hierarchical pos_embed in the decoder. Defaults to False. | False |
num_global_vectors | int | The number of global vectors. Defaults to 4. | 4 |
use_dec_self_global | bool | Whether to use the global vector in decoder self-attention. Defaults to True. | True |
dec_self_update_global | bool | Whether to update the global vector in the decoder. Defaults to True. | True |
use_dec_cross_global | bool | Whether to use the global vector in decoder cross-attention. Defaults to True. | True |
use_global_vector_ffn | bool | Whether to use the global-vector FFN. Defaults to True. | True |
use_global_self_attn | bool | Whether to use global self-attention. Defaults to False. | False |
separate_global_qkv | bool | Whether to use separate QKV projections for the global vectors. Defaults to False. | False |
global_dim_ratio | int | The dimension ratio of the global vectors. Defaults to 1. | 1 |
self_pattern | str | The self-attention pattern. Defaults to "axial". | 'axial' |
cross_self_pattern | str | The decoder self-attention pattern. Defaults to "axial". | 'axial' |
cross_pattern | str | The cross-attention pattern. Defaults to "cross_1x1". | 'cross_1x1' |
z_init_method | str | How the initial input to the decoder is initialized. Defaults to "nearest_interp". | 'nearest_interp' |
initial_downsample_type | str | The type of the initial downsampling. Defaults to "conv". | 'conv' |
initial_downsample_activation | str | The activation of the initial downsampling. Defaults to "leaky". | 'leaky' |
initial_downsample_scale | int | The scale of the initial downsampling. Defaults to 1. | 1 |
initial_downsample_conv_layers | int | The number of conv layers in the initial downsampling. Defaults to 2. | 2 |
final_upsample_conv_layers | int | The number of conv layers in the final upsampling. Defaults to 2. | 2 |
initial_downsample_stack_conv_num_layers | int | The number of stacked conv layers in the initial downsampling. Defaults to 1. | 1 |
initial_downsample_stack_conv_dim_list | list | The dim list of the stacked convs in the initial downsampling. Defaults to None. | None |
initial_downsample_stack_conv_downscale_list | list | The downscale list of the stacked convs in the initial downsampling. Defaults to [1]. | [1] |
initial_downsample_stack_conv_num_conv_list | list | The number of convs per stack in the initial downsampling. Defaults to [2]. | [2] |
ffn_activation | str | The activation of the FFN. Defaults to "leaky". | 'leaky' |
gated_ffn | bool | Whether to use a gated FFN. Defaults to False. | False |
norm_layer | str | The type of normalization. Defaults to "layer_norm". | 'layer_norm' |
padding_type | str | The type of padding. Defaults to "ignore". | 'ignore' |
pos_embed_type | str | The type of positional embedding. Defaults to "t+hw". | 't+hw' |
checkpoint_level | bool | Whether to use gradient checkpointing. Defaults to True. | True |
use_relative_pos | bool | Whether to use relative position encoding. Defaults to True. | True |
self_attn_use_final_proj | bool | Whether to use a final projection in self-attention. Defaults to True. | True |
dec_use_first_self_attn | bool | Whether to use self-attention in the first decoder layer. Defaults to False. | False |
attn_linear_init_mode | str | The init mode of the attention linear layers. Defaults to "0". | '0' |
ffn_linear_init_mode | str | The init mode of the FFN linear layers. Defaults to "0". | '0' |
conv_init_mode | str | The init mode of the conv layers. Defaults to "0". | '0' |
down_up_linear_init_mode | str | The init mode of the downsampling and upsampling linear layers. Defaults to "0". | '0' |
norm_init_mode | str | The init mode of the normalization layers. Defaults to "0". | '0' |
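Example (a minimal construction sketch, not taken from the library's own docs: the key names, shapes, and base_units below are hypothetical, and input_shape/target_shape follow the (T, H, W, C) layout of the forward pass described below):

>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.CuboidTransformer(
...     input_keys=("input",),
...     output_keys=("output",),
...     input_shape=(13, 32, 32, 1),   # (T, H, W, C), hypothetical
...     target_shape=(12, 32, 32, 1),  # (T_out, H, W, C_out), hypothetical
...     base_units=64,
... )
>>> out = model({"input": paddle.rand([1, 13, 32, 32, 1])})
>>> # out["output"] is expected to have shape [1, 12, 32, 32, 1]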
Source code in ppsci/arch/cuboid_transformer.py
forward(x, verbose=False)
¶
Parameters:

Name | Type | Description | Default |
---|---|---|---|
x | Tensor | Tensor with shape (B, T, H, W, C). | required |
verbose | bool | If True, print intermediate shapes. | False |

Returns:

Name | Type | Description |
---|---|---|
out | Tensor | The output tensor, with shape (B, T_out, H, W, C_out). |
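For instance, continuing the hypothetical CuboidTransformer sketch above, the verbose flag can be used to inspect the intermediate shapes:

>>> out = model.forward(paddle.rand([1, 13, 32, 32, 1]), verbose=True)  # prints shapes of each stage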
Source code in ppsci/arch/cuboid_transformer.py
SFNONet
¶
Bases: Arch
N-Dimensional Tensorized Fourier Neural Operator.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
n_modes | Tuple[int, ...] | Number of modes to keep in the Fourier Layer, along each dimension. The dimensionality of the SFNO is inferred from len(n_modes). | required |
hidden_channels | int | Width of the FNO (i.e. number of channels). | required |
in_channels | int | Number of input channels. Defaults to 3. | 3 |
out_channels | int | Number of output channels. Defaults to 1. | 1 |
lifting_channels | int | Number of hidden channels of the lifting block of the FNO. Defaults to 256. | 256 |
projection_channels | int | Number of hidden channels of the projection block of the FNO. Defaults to 256. | 256 |
n_layers | int | Number of Fourier Layers. Defaults to 4. | 4 |
use_mlp | bool | Whether to use an MLP layer after each FNO block. Defaults to False. | False |
mlp | Dict[str, float] | Parameters of the MLP, {'expansion': float, 'dropout': float}. Defaults to None. | None |
non_linearity | functional | Non-linearity module to use. Defaults to F.gelu. | gelu |
norm | str | Normalization layer to use. Defaults to None. | None |
ada_in_features | Optional[int] | The number of input channels of the adaptive normalization. Defaults to None. | None |
preactivation | bool | Whether to use ResNet-style preactivation. Defaults to False. | False |
fno_skip | str | Type of skip connection to use, one of {'linear', 'identity', 'soft-gating'}. Defaults to "linear". | 'linear' |
separable | bool | Whether to use a depthwise separable spectral convolution. Defaults to False. | False |
factorization | str | Tensor factorization to use for the parameter weights. If None, a dense tensor parametrizes the spectral convolutions; otherwise, the specified tensor factorization is used. Defaults to None. | None |
rank | float | Rank of the tensor factorization of the Fourier weights. Defaults to 1.0. | 1.0 |
joint_factorization | bool | Whether all the Fourier Layers should be parameterized by a single tensor (vs. one per layer). Defaults to False. | False |
implementation | str | Forward mode to use when factorization is not None, one of {'factorized', 'reconstructed'}. Defaults to "factorized". | 'factorized' |
domain_padding | Optional[list] | Percentage of padding to use. Defaults to None. | None |
domain_padding_mode | str | How to perform domain padding, one of {'symmetric', 'one-sided'}. Defaults to "one-sided". | 'one-sided' |
fft_norm | str | The normalization mode for the FFT. Defaults to "forward". | 'forward' |
patching_levels | int | Number of patching levels to use. Defaults to 0. | 0 |
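Example (a minimal sketch, not taken from the library's own docs: a 2D problem with hypothetical key names is assumed, along with a channel-first (B, C, H, W) input layout):

>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.SFNONet(
...     input_keys=("input",),
...     output_keys=("output",),
...     n_modes=(16, 16),  # two entries, so a 2D SFNO is inferred
...     hidden_channels=64,
...     in_channels=1,
...     out_channels=1,
... )
>>> out = model({"input": paddle.rand([4, 1, 32, 64])})  # (B, C, H, W), hypothetical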
Source code in ppsci/arch/sfnonet.py
forward(x)
¶
SFNO's forward pass
Source code in ppsci/arch/sfnonet.py
UNONet
¶
Bases: Arch
N-Dimensional U-Shaped Neural Operator.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
in_channels | int | Number of input channels. | required |
out_channels | int | Number of output channels. | required |
hidden_channels | int | Width of the FNO (i.e. number of channels). | required |
lifting_channels | int | Number of hidden channels of the lifting block of the FNO. Defaults to 256. | 256 |
projection_channels | int | Number of hidden channels of the projection block of the FNO. Defaults to 256. | 256 |
n_layers | int | Number of Fourier Layers. Defaults to 4. | 4 |
uno_out_channels | Tuple[int, ...] | Number of output channels of each Fourier Layer. Example: for a five-layer UNO, uno_out_channels can be [32, 64, 64, 64, 32]. Defaults to None. | None |
uno_n_modes | Tuple[Tuple[int, ...], ...] | Number of Fourier modes to use in the integral operation of each Fourier Layer (along each dimension). Example: for a five-layer UNO with 2D input, uno_n_modes can be [[5, 5], [5, 5], [5, 5], [5, 5], [5, 5]]. Defaults to None. | None |
uno_scalings | Tuple[Tuple[int, ...], ...] | Scaling factors for each Fourier Layer. Example: for a five-layer UNO with 2D input, uno_scalings can be [[1.0, 1.0], [0.5, 0.5], [1, 1], [1, 1], [2, 2]]. Defaults to None. | None |
horizontal_skips_map | Dict | A map {..., b: a, ...} denoting a horizontal skip connection from the a-th layer to the b-th layer. If None, the default skip connections are applied. Example: for a five-layer UNO, the skip connections can be horizontal_skips_map = {4: 0, 3: 1}. Defaults to None. | None |
incremental_n_modes | Optional[Tuple[int, ...]] | Incremental number of modes to use in the Fourier domain. If not None, the number of modes can be increased incrementally during training; it must satisfy n <= N for (n, N) in zip(incremental_n_modes, n_modes). If None, all n_modes are used. This can be updated dynamically during training. Defaults to None. | None |
use_mlp | bool | Whether to use an MLP layer after each FNO block. Defaults to False. | False |
mlp | Dict[str, float] | Parameters of the MLP, {'expansion': float, 'dropout': float}. Defaults to None. | None |
non_linearity | functional | Non-linearity module to use. Defaults to F.gelu. | gelu |
norm | str | Normalization layer to use. Defaults to None. | None |
ada_in_features | Optional[int] | The number of input channels of the adaptive normalization. Defaults to None. | None |
preactivation | bool | Whether to use ResNet-style preactivation. Defaults to False. | False |
fno_skip | str | Type of skip connection to use for the FNO block. Defaults to "linear". | 'linear' |
horizontal_skip | str | Type of skip connection to use for horizontal skips. Defaults to "linear". | 'linear' |
mlp_skip | str | Type of skip connection to use for the MLP. Defaults to "soft-gating". | 'soft-gating' |
separable | bool | Whether to use a depthwise separable spectral convolution. Defaults to False. | False |
factorization | str | Tensor factorization to use for the parameter weights. If None, a dense tensor parametrizes the spectral convolutions; otherwise, the specified tensor factorization is used. Defaults to None. | None |
rank | float | Rank of the tensor factorization of the Fourier weights. Defaults to 1.0. | 1.0 |
joint_factorization | bool | Whether all the Fourier Layers should be parameterized by a single tensor (vs. one per layer). Defaults to False. | False |
implementation | str | Forward mode to use when factorization is not None, one of {'factorized', 'reconstructed'}. Defaults to "factorized". | 'factorized' |
domain_padding | Optional[Union[list, float, int]] | Percentage of padding to use. Defaults to None. | None |
domain_padding_mode | str | How to perform domain padding, one of {'symmetric', 'one-sided'}. Defaults to "one-sided". | 'one-sided' |
fft_norm | str | The normalization mode for the FFT. Defaults to "forward". | 'forward' |
patching_levels | int | Number of patching levels to use. Defaults to 0. | 0 |
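Example (a minimal sketch, not taken from the library's own docs; it reuses the five-layer configuration quoted in the parameter descriptions above, with hypothetical key names and a (B, C, H, W) input layout assumed):

>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.UNONet(
...     input_keys=("input",),
...     output_keys=("output",),
...     in_channels=1,
...     out_channels=1,
...     hidden_channels=64,
...     n_layers=5,
...     uno_out_channels=(32, 64, 64, 64, 32),
...     uno_n_modes=((8, 8), (8, 8), (8, 8), (8, 8), (8, 8)),
...     uno_scalings=((1.0, 1.0), (0.5, 0.5), (1, 1), (1, 1), (2, 2)),
... )
>>> out = model({"input": paddle.rand([4, 1, 32, 32])})  # (B, C, H, W), hypothetical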
Source code in ppsci/arch/unonet.py
TFNO1dNet
¶
Bases: FNONet
1D Fourier Neural Operator.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
n_modes_height | Tuple[int, ...] | Number of Fourier modes to keep along the height. | required |
hidden_channels | int | Width of the FNO (i.e. number of channels). | required |
in_channels | int | Number of input channels. Defaults to 3. | 3 |
out_channels | int | Number of output channels. Defaults to 1. | 1 |
lifting_channels | int | Number of hidden channels of the lifting block of the FNO. Defaults to 256. | 256 |
projection_channels | int | Number of hidden channels of the projection block of the FNO. Defaults to 256. | 256 |
n_layers | int | Number of Fourier Layers. Defaults to 4. | 4 |
use_mlp | bool | Whether to use an MLP layer after each FNO block. Defaults to False. | False |
mlp | Dict[str, float] | Parameters of the MLP, {'expansion': float, 'dropout': float}. Defaults to None. | None |
non_linearity | functional | Non-linearity module to use. Defaults to F.gelu. | gelu |
norm | module | Normalization layer to use. Defaults to None. | None |
preactivation | bool | Whether to use ResNet-style preactivation. Defaults to False. | False |
skip | str | Type of skip connection to use, one of {'linear', 'identity', 'soft-gating'}. Defaults to "soft-gating". | 'soft-gating' |
separable | bool | Whether to use a depthwise separable spectral convolution. Defaults to False. | False |
factorization | str | Tensor factorization to use for the parameter weights. If None, a dense tensor parametrizes the spectral convolutions; otherwise, the specified tensor factorization is used. Defaults to "Tucker". | 'Tucker' |
rank | float | Rank of the tensor factorization of the Fourier weights. Defaults to 1.0. | 1.0 |
joint_factorization | bool | Whether all the Fourier Layers should be parameterized by a single tensor (vs. one per layer). Defaults to False. | False |
implementation | str | Forward mode to use when factorization is not None, one of {'factorized', 'reconstructed'}. Defaults to "factorized". | 'factorized' |
domain_padding | Optional[Union[list, float, int]] | Percentage of padding to use. Defaults to None. | None |
domain_padding_mode | str | How to perform domain padding, one of {'symmetric', 'one-sided'}. Defaults to "one-sided". | 'one-sided' |
fft_norm | str | The normalization mode for the FFT. Defaults to "forward". | 'forward' |
patching_levels | int | Number of patching levels to use. Defaults to 0. | 0 |
SpectralConv | layer | Spectral convolution layer to use. Defaults to fno_block.FactorizedSpectralConv. | FactorizedSpectralConv |
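Example (a minimal sketch, not taken from the library's own docs: key names and shapes are hypothetical, a (B, C, L) input layout is assumed, and n_modes_height is passed as a plain int to match the "along the height" description):

>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.TFNO1dNet(
...     input_keys=("input",),
...     output_keys=("output",),
...     n_modes_height=16,  # assumed to be a single mode count along the one spatial dimension
...     hidden_channels=64,
...     in_channels=1,
...     out_channels=1,
... )
>>> out = model({"input": paddle.rand([4, 1, 128])})  # (B, C, L), hypothetical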
Source code in ppsci/arch/tfnonet.py
TFNO2dNet
¶
Bases: FNONet
2D Fourier Neural Operator.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
n_modes_height | int | Number of Fourier modes to keep along the height. | required |
n_modes_width | int | Number of Fourier modes to keep along the width. | required |
hidden_channels | int | Width of the FNO (i.e. number of channels). | required |
in_channels | int | Number of input channels. Defaults to 3. | 3 |
out_channels | int | Number of output channels. Defaults to 1. | 1 |
lifting_channels | int | Number of hidden channels of the lifting block of the FNO. Defaults to 256. | 256 |
projection_channels | int | Number of hidden channels of the projection block of the FNO. Defaults to 256. | 256 |
n_layers | int | Number of Fourier Layers. Defaults to 4. | 4 |
use_mlp | bool | Whether to use an MLP layer after each FNO block. Defaults to False. | False |
mlp | Dict[str, float] | Parameters of the MLP, {'expansion': float, 'dropout': float}. Defaults to None. | None |
non_linearity | Layer | Non-linearity module to use. Defaults to F.gelu. | gelu |
norm | module | Normalization layer to use. Defaults to None. | None |
preactivation | bool | Whether to use ResNet-style preactivation. Defaults to False. | False |
skip | str | Type of skip connection to use, one of {'linear', 'identity', 'soft-gating'}. Defaults to "soft-gating". | 'soft-gating' |
separable | bool | Whether to use a depthwise separable spectral convolution. Defaults to False. | False |
factorization | str | Tensor factorization to use for the parameter weights. If None, a dense tensor parametrizes the spectral convolutions; otherwise, the specified tensor factorization is used. Defaults to "Tucker". | 'Tucker' |
rank | float | Rank of the tensor factorization of the Fourier weights. Defaults to 1.0. | 1.0 |
joint_factorization | bool | Whether all the Fourier Layers should be parameterized by a single tensor (vs. one per layer). Defaults to False. | False |
implementation | str | Forward mode to use when factorization is not None, one of {'factorized', 'reconstructed'}. Defaults to "factorized". | 'factorized' |
domain_padding | Union[list, float, int] | Percentage of padding to use. Defaults to None. | None |
domain_padding_mode | str | How to perform domain padding, one of {'symmetric', 'one-sided'}. Defaults to "one-sided". | 'one-sided' |
fft_norm | str | The normalization mode for the FFT. Defaults to "forward". | 'forward' |
patching_levels | int | Number of patching levels to use. Defaults to 0. | 0 |
SpectralConv | layer | Spectral convolution layer to use. Defaults to fno_block.FactorizedSpectralConv. | FactorizedSpectralConv |
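Example (a minimal sketch, not taken from the library's own docs: key names and shapes are hypothetical, and a (B, C, H, W) input layout is assumed):

>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.TFNO2dNet(
...     input_keys=("input",),
...     output_keys=("output",),
...     n_modes_height=16,
...     n_modes_width=16,
...     hidden_channels=64,
...     in_channels=1,
...     out_channels=1,
... )
>>> out = model({"input": paddle.rand([4, 1, 32, 32])})  # (B, C, H, W), hypothetical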
Source code in ppsci/arch/tfnonet.py
TFNO3dNet
¶
Bases: FNONet
3D Fourier Neural Operator.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_keys | Tuple[str, ...] | Name of input keys, such as ("input",). | required |
output_keys | Tuple[str, ...] | Name of output keys, such as ("output",). | required |
n_modes_height | int | Number of Fourier modes to keep along the height. | required |
n_modes_width | int | Number of Fourier modes to keep along the width. | required |
n_modes_depth | int | Number of Fourier modes to keep along the depth. | required |
hidden_channels | int | Width of the FNO (i.e. number of channels). | required |
in_channels | int | Number of input channels. Defaults to 3. | 3 |
out_channels | int | Number of output channels. Defaults to 1. | 1 |
lifting_channels | int | Number of hidden channels of the lifting block of the FNO. Defaults to 256. | 256 |
projection_channels | int | Number of hidden channels of the projection block of the FNO. Defaults to 256. | 256 |
n_layers | int | Number of Fourier Layers. Defaults to 4. | 4 |
use_mlp | bool | Whether to use an MLP layer after each FNO block. Defaults to False. | False |
mlp | Dict[str, float] | Parameters of the MLP, {'expansion': float, 'dropout': float}. Defaults to None. | None |
non_linearity | Layer | Non-linearity module to use. Defaults to F.gelu. | gelu |
norm | module | Normalization layer to use. Defaults to None. | None |
preactivation | bool | Whether to use ResNet-style preactivation. Defaults to False. | False |
skip | str | Type of skip connection to use, one of {'linear', 'identity', 'soft-gating'}. Defaults to "soft-gating". | 'soft-gating' |
separable | bool | Whether to use a depthwise separable spectral convolution. Defaults to False. | False |
factorization | str | Tensor factorization to use for the parameter weights. If None, a dense tensor parametrizes the spectral convolutions; otherwise, the specified tensor factorization is used. Defaults to "Tucker". | 'Tucker' |
rank | float | Rank of the tensor factorization of the Fourier weights. Defaults to 1.0. | 1.0 |
joint_factorization | bool | Whether all the Fourier Layers should be parameterized by a single tensor (vs. one per layer). Defaults to False. | False |
implementation | str | Forward mode to use when factorization is not None, one of {'factorized', 'reconstructed'}. Defaults to "factorized". | 'factorized' |
domain_padding | Union[list, float, int] | Percentage of padding to use. Defaults to None. | None |
domain_padding_mode | str | How to perform domain padding, one of {'symmetric', 'one-sided'}. Defaults to "one-sided". | 'one-sided' |
fft_norm | str | The normalization mode for the FFT. Defaults to "forward". | 'forward' |
patching_levels | int | Number of patching levels to use. Defaults to 0. | 0 |
SpectralConv | layer | Spectral convolution layer to use. Defaults to fno_block.FactorizedSpectralConv. | FactorizedSpectralConv |
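Example (a minimal sketch, not taken from the library's own docs: key names and shapes are hypothetical, and a channel-first layout over three spatial dimensions is assumed):

>>> import paddle
>>> import ppsci
>>> model = ppsci.arch.TFNO3dNet(
...     input_keys=("input",),
...     output_keys=("output",),
...     n_modes_height=8,
...     n_modes_width=8,
...     n_modes_depth=8,
...     hidden_channels=64,
...     in_channels=1,
...     out_channels=1,
... )
>>> out = model({"input": paddle.rand([4, 1, 16, 16, 16])})  # (B, C, H, W, D), hypothetical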
Source code in ppsci/arch/tfnonet.py