
EPNN

# Train
# linux
wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstate-16-plas.dat -P ./datasets/
wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstress-16-plas.dat -P ./datasets/
# windows
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstate-16-plas.dat --create-dirs -o ./datasets/dstate-16-plas.dat
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstress-16-plas.dat --create-dirs -o ./datasets/dstress-16-plas.dat
python epnn.py
# Evaluate with the pretrained model
# linux
wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstate-16-plas.dat -P ./datasets/
wget -nc https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstress-16-plas.dat -P ./datasets/
# windows
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstate-16-plas.dat --create-dirs -o ./datasets/dstate-16-plas.dat
# curl https://paddle-org.bj.bcebos.com/paddlescience/datasets/epnn/dstress-16-plas.dat --create-dirs -o ./datasets/dstress-16-plas.dat
python epnn.py mode=eval EVAL.pretrained_model_path=https://paddle-org.bj.bcebos.com/paddlescience/models/epnn/epnn_pretrained.pdparams
| Pretrained model | Metric |
| --- | --- |
| epnn_pretrained.pdparams | error(total): 3.96903<br>error(error_elasto): 0.65328<br>error(error_plastic): 3.04176<br>error(error_stress): 0.27399 |

1. Background

This example reproduces the Elasto-Plastic Neural Network (EPNN), a Physics-Informed Neural Network (PINN) surrogate model. Embedding this physics into the network architecture allows the network to be trained more effectively with less data, while improving its ability to extrapolate to loading regimes outside the training data. The EPNN architecture is model- and material-agnostic: it can be adapted to a variety of elasto-plastic materials, including geomaterials and metals, and experimental data can be used directly to train the network. To demonstrate the robustness of the proposed architecture, its general framework is applied here to the elasto-plastic behavior of sand. EPNN outperforms conventional neural network architectures in predicting unobserved strain-controlled loading paths for sands of different initial densities.

2. Problem Definition

In a neural network, information flows through connected neurons. The "strength" of each connection in the network is determined by a trainable weight:

\[ z_l^{\mathrm{i}}=W_{k l}^{\mathrm{i}-1, \mathrm{i}} a_k^{\mathrm{i}-1}+b^{\mathrm{i}-1}, \quad k=1: N^{\mathrm{i}-1} \quad \text { or } \quad \mathbf{z}^{\mathrm{i}}=\mathbf{a}^{\mathrm{i}-1} \mathbf{W}^{\mathrm{i}-1, \mathrm{i}}+b^{\mathrm{i}-1} \mathbf{I} \]

where \(b\) is the bias term, \(N\) is the number of neurons in the corresponding layer, and \(\mathbf{I}\) denotes a vector whose elements are all 1.
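In code, this affine transformation for one layer can be sketched as follows; this is a minimal NumPy illustration of the formula above, not part of the EPNN implementation, and the concrete sizes and values are assumptions for the example:

```python
import numpy as np

# z^i = a^{i-1} W^{i-1,i} + b^{i-1} * I  (one dense layer, one row of activations)
a_prev = np.array([[1.0, 2.0, 3.0]])  # activations of layer i-1, shape (1, N^{i-1})
W = np.ones((3, 2)) * 0.5             # weights W^{i-1,i}, shape (N^{i-1}, N^i)
b = 0.1                               # scalar bias b^{i-1}

z = a_prev @ W + b * np.ones(2)       # pre-activations of layer i
print(z)  # [[3.1 3.1]]
```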

3. Solving the Problem

The following explains, step by step, how to translate this problem into PaddleScience code and solve it with deep learning. For a quick understanding of PaddleScience, only the key steps such as model construction, equation construction, and computational domain construction are described below; for the remaining details, please refer to the API documentation.

3.1 Model Construction

For the EPNN problem, the networks are constructed with PaddleScience code as follows:

node_sizes_state_plastic.extend(hl_nodes_plastic)
node_sizes_stress.extend(hl_nodes_elasto)
node_sizes_state_elasto.extend([state_y_output_size - 3])
node_sizes_state_plastic.extend([state_y_output_size - 1])
node_sizes_stress.extend([1])

activation_elasto = "leaky_relu"
activation_plastic = "leaky_relu"
activations_elasto = [activation_elasto]
activations_plastic = [activation_plastic]
activations_elasto.extend([activation_elasto for ii in range(nhlayers)])
activations_plastic.extend([activation_plastic for ii in range(NHLAYERS_PLASTIC)])
activations_elasto.extend([activation_elasto])
activations_plastic.extend([activation_plastic])
drop_p = 0.0
n_state_elasto = ppsci.arch.Epnn(
    ("state_x",),
    ("out_state_elasto",),
    tuple(node_sizes_state_elasto),
    tuple(activations_elasto),
    drop_p,
)

The EPNN parameters are input_keys, the input field names; output_keys, the output field names; node_sizes, the list of layer sizes; activations, the list of activation function names; and drop_p, the dropout probability for the nodes.
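The excerpt above extends size and activation lists that are initialized earlier in the script. A minimal sketch of how such lists might be assembled for one sub-network is shown below; the concrete layer widths here are illustrative assumptions, not the example's real configuration:

```python
# Hypothetical sketch: assemble layer-size and activation lists for one sub-network.
# All concrete sizes below are illustrative assumptions.
state_x_input_size = 6              # assumed input width
state_y_output_size = 4             # assumed output width
nhlayers = 3                        # assumed number of hidden layers
hl_nodes_elasto = [60] * nhlayers   # assumed hidden-layer widths

node_sizes_state_elasto = [state_x_input_size]             # input layer
node_sizes_state_elasto.extend(hl_nodes_elasto)            # hidden layers
node_sizes_state_elasto.extend([state_y_output_size - 3])  # output layer

# one activation per layer transition, as in the excerpt above
activations_elasto = ["leaky_relu"] * (nhlayers + 2)

print(node_sizes_state_elasto)  # [6, 60, 60, 60, 1]
```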

3.2 Data Generation

This example generates its data by reading files, as shown below:

(
    input_dict_train,
    label_dict_train,
    input_dict_val,
    label_dict_val,
) = functions.get_data(cfg.DATASET_STATE, cfg.DATASET_STRESS, cfg.NTRAIN_SIZE)
        n_train = math.floor(self.train_p * self.x.shape[0])
        n_cross_valid = math.floor(self.cross_valid_p * self.x.shape[0])
        n_test = math.floor(self.test_p * self.x.shape[0])
        self.x_train = self.x[shuffled_indices[0:n_train]]
        self.y_train = self.y[shuffled_indices[0:n_train]]
        self.x_valid = self.x[shuffled_indices[n_train : n_train + n_cross_valid]]
        self.y_valid = self.y[shuffled_indices[n_train : n_train + n_cross_valid]]
        self.x_test = self.x[
            shuffled_indices[n_train + n_cross_valid : n_train + n_cross_valid + n_test]
        ]
        self.y_test = self.y[
            shuffled_indices[n_train + n_cross_valid : n_train + n_cross_valid + n_test]
        ]


def get_data(dataset_state, dataset_stress, ntrain_size):

Here the Data class reads the files to build the dataset, get_shuffled_data shuffles the data, the number of shuffled samples to fetch, itrain, is computed, and finally get fetches 10 groups of itrain samples each.
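The train/validation/test split performed in the excerpt can be sketched with plain Python; the split fractions and sample count below are assumptions for illustration, not the example's actual values:

```python
import math
import random

# Assumed split fractions and dataset size, for illustration only.
train_p, cross_valid_p, test_p = 0.6, 0.2, 0.2
n_samples = 100

n_train = math.floor(train_p * n_samples)              # 60
n_cross_valid = math.floor(cross_valid_p * n_samples)  # 20
n_test = math.floor(test_p * n_samples)                # 20

# Shuffle the indices once, then carve out contiguous slices, as in the excerpt.
shuffled = list(range(n_samples))
random.shuffle(shuffled)
train_idx = shuffled[:n_train]
valid_idx = shuffled[n_train : n_train + n_cross_valid]
test_idx = shuffled[n_train + n_cross_valid : n_train + n_cross_valid + n_test]

assert len(train_idx) + len(valid_idx) + len(test_idx) == n_samples
```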

3.3 Constraint Construction

Set up the training dataset, the loss function, and the returned fields, as shown below:

output_keys = [
    "state_x",
    "state_y",
    "stress_x",
    "stress_y",
    "out_state_elasto",
    "out_state_plastic",
    "out_stress",
]
sup_constraint_pde = ppsci.constraint.SupervisedConstraint(
    {
        "dataset": {
            "name": "NamedArrayDataset",
            "input": input_dict_train,
            "label": label_dict_train,
        },
        "batch_size": 1,
        "num_workers": 0,
    },
    ppsci.loss.FunctionalLoss(functions.train_loss_func),
    {key: (lambda out, k=key: out[k]) for key in output_keys},
    name="sup_train",
)
constraint_pde = {sup_constraint_pde.name: sup_constraint_pde}

The first argument of SupervisedConstraint is the reader configuration for the supervised constraint. The "dataset" field describes the training dataset; its sub-fields are:

  1. name: the dataset type; here "NamedArrayDataset" denotes a dataset that is read sequentially;
  2. input: the input data;
  3. label: the label data;

The second argument is the loss function; here the custom function train_loss_func is used.

The third argument is the expression dictionary describing how the constraint targets are computed; the computed values are stored in the output under the specified names so that they are available when the loss is evaluated.

The fourth argument is the name of the constraint; each constraint is given a name so that it can be indexed later.

Once the constraint is built, it is wrapped into a dictionary keyed by the name just chosen, for convenient access later.

3.4 Evaluator Construction

As with the constraint, this problem uses ppsci.validate.SupervisedValidator to build the evaluator. Its arguments have the same meaning as in constraint construction; the only difference is the evaluation metric, metric. The code is shown below:

sup_validator_pde = ppsci.validate.SupervisedValidator(
    {
        "dataset": {
            "name": "NamedArrayDataset",
            "input": input_dict_val,
            "label": label_dict_val,
        },
        "batch_size": 1,
        "num_workers": 0,
    },
    ppsci.loss.FunctionalLoss(functions.eval_loss_func),
    {key: (lambda out, k=key: out[k]) for key in output_keys},
    metric={"metric": ppsci.metric.FunctionalMetric(functions.metric_expr)},
    name="sup_valid",
)
validator_pde = {sup_validator_pde.name: sup_validator_pde}

3.5 Hyperparameter Settings

Next, the number of training epochs needs to be specified; based on experimental experience, 10000 epochs are used here, with iters_per_epoch set to 1.

epochs: 10000
iters_per_epoch: 1

3.6 Optimizer Construction

During training, the optimizer is called to update the model parameters. The widely used Adam optimizer is chosen here, together with the common ExponentialDecay learning-rate schedule.

Since several models are used, several optimizers are needed; for the EPNN networks, an Adam optimizer is set up for each.
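As a reminder of what ExponentialDecay does: the learning rate is multiplied by a fixed factor gamma at each decay step, i.e. lr_t = lr_0 * gamma^t. A minimal sketch follows; lr0 and gamma here are illustrative assumptions, not the example's actual configuration:

```python
# Minimal sketch of an exponential learning-rate decay schedule.
# lr0 and gamma are illustrative assumptions, not the example's real values.
lr0 = 0.001
gamma = 0.97

def exponential_decay(step: int) -> float:
    """Learning rate after `step` decay steps: lr0 * gamma**step."""
    return lr0 * gamma**step

print(exponential_decay(0))             # 0.001
print(round(exponential_decay(10), 6))  # 0.000737
```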

    ("out_state_plastic",),
    tuple(node_sizes_state_plastic),
    tuple(activations_plastic),
    drop_p,
)
n_stress = ppsci.arch.Epnn(
    ("state_x_f",),
    ("out_stress",),
    tuple(node_sizes_stress),
    tuple(activations_elasto),

Then, for the additional gkratio parameter, another optimizer is set up.

    )
    return (n_state_elasto, n_state_plastic, n_stress)


def get_optimizer_list(model_list, cfg):
    optimizer_list = []
    lr_list = [0.001, 0.001, 0.01]
    for i, model in enumerate(model_list):

The optimizers are applied in order; the combined code is:

        ("out_state_plastic",),
        tuple(node_sizes_state_plastic),
        tuple(activations_plastic),
        drop_p,
    )
    n_stress = ppsci.arch.Epnn(
        ("state_x_f",),
        ("out_stress",),
        tuple(node_sizes_stress),
        tuple(activations_elasto),
        drop_p,
    )
    return (n_state_elasto, n_state_plastic, n_stress)


def get_optimizer_list(model_list, cfg):
    optimizer_list = []
    lr_list = [0.001, 0.001, 0.01]
    for i, model in enumerate(model_list):

3.7 Custom Loss

Because this problem involves unsupervised learning, the data contain no labels and the loss is computed from what the models return, so a custom loss is required. The approach is to first define the relevant functions and then pass the function names as arguments to FunctionalLoss and FunctionalMetric.

Note that a custom loss function's inputs and outputs must be consistent with PaddleScience's built-in functions such as MSE: the inputs are dictionary variables such as the model output output_dict, and the output is the loss value as a paddle.Tensor.

The relevant custom loss functions are computed with MAE; the code is:

        )
    }


def train_loss_func(output_dict, *args) -> paddle.Tensor:
    """For model calculation of loss in model.train().

    Args:
        output_dict (Dict[str, paddle.Tensor]): The output dict.

    Returns:
        paddle.Tensor: Loss value.
    """

3.8 Model Training and Evaluation

With everything above set up, simply pass the instantiated objects, in order, to ppsci.solver.Solver:

solver = ppsci.solver.Solver(
    model_list_obj,
    constraint_pde,
    cfg.output_dir,
    optimizer_list,
    None,
    cfg.TRAIN.epochs,
    cfg.TRAIN.iters_per_epoch,
    save_freq=cfg.TRAIN.save_freq,
    eval_during_train=cfg.TRAIN.eval_during_train,
    validator=validator_pde,
    eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
)

With eval_during_train set to True, an evaluation is run after each training epoch.

eval_during_train: true

Finally, start the training:

solver.train()

4. Complete Code

epnn.py
# Copyright (c) 2023 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
Reference: https://github.com/meghbali/ANNElastoplasticity
"""

from os import path as osp

import functions
import hydra
from omegaconf import DictConfig

import ppsci
from ppsci.utils import logger


def train(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)

    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, f"{cfg.mode}.log"), "info")

    (
        input_dict_train,
        label_dict_train,
        input_dict_val,
        label_dict_val,
    ) = functions.get_data(cfg.DATASET_STATE, cfg.DATASET_STRESS, cfg.NTRAIN_SIZE)
    model_list = functions.get_model_list(
        cfg.MODEL.ihlayers,
        cfg.MODEL.ineurons,
        input_dict_train["state_x"][0].shape[1],
        input_dict_train["state_y"][0].shape[1],
        input_dict_train["stress_x"][0].shape[1],
    )
    optimizer_list = functions.get_optimizer_list(model_list, cfg)
    model_state_elasto, model_state_plastic, model_stress = model_list
    model_list_obj = ppsci.arch.ModelList(model_list)

    def _transform_in_stress(_in):
        return functions.transform_in_stress(
            _in, model_state_elasto, "out_state_elasto"
        )

    model_state_elasto.register_input_transform(functions.transform_in)
    model_state_plastic.register_input_transform(functions.transform_in)
    model_stress.register_input_transform(_transform_in_stress)
    model_stress.register_output_transform(functions.transform_out)

    output_keys = [
        "state_x",
        "state_y",
        "stress_x",
        "stress_y",
        "out_state_elasto",
        "out_state_plastic",
        "out_stress",
    ]
    sup_constraint_pde = ppsci.constraint.SupervisedConstraint(
        {
            "dataset": {
                "name": "NamedArrayDataset",
                "input": input_dict_train,
                "label": label_dict_train,
            },
            "batch_size": 1,
            "num_workers": 0,
        },
        ppsci.loss.FunctionalLoss(functions.train_loss_func),
        {key: (lambda out, k=key: out[k]) for key in output_keys},
        name="sup_train",
    )
    constraint_pde = {sup_constraint_pde.name: sup_constraint_pde}

    sup_validator_pde = ppsci.validate.SupervisedValidator(
        {
            "dataset": {
                "name": "NamedArrayDataset",
                "input": input_dict_val,
                "label": label_dict_val,
            },
            "batch_size": 1,
            "num_workers": 0,
        },
        ppsci.loss.FunctionalLoss(functions.eval_loss_func),
        {key: (lambda out, k=key: out[k]) for key in output_keys},
        metric={"metric": ppsci.metric.FunctionalMetric(functions.metric_expr)},
        name="sup_valid",
    )
    validator_pde = {sup_validator_pde.name: sup_validator_pde}

    # initialize solver
    solver = ppsci.solver.Solver(
        model_list_obj,
        constraint_pde,
        cfg.output_dir,
        optimizer_list,
        None,
        cfg.TRAIN.epochs,
        cfg.TRAIN.iters_per_epoch,
        save_freq=cfg.TRAIN.save_freq,
        eval_during_train=cfg.TRAIN.eval_during_train,
        validator=validator_pde,
        eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
    )

    # train model
    solver.train()
    functions.plotting(cfg.output_dir)


def evaluate(cfg: DictConfig):
    # set random seed for reproducibility
    ppsci.utils.misc.set_random_seed(cfg.seed)
    # initialize logger
    logger.init_logger("ppsci", osp.join(cfg.output_dir, f"{cfg.mode}.log"), "info")

    (
        input_dict_train,
        _,
        input_dict_val,
        label_dict_val,
    ) = functions.get_data(cfg.DATASET_STATE, cfg.DATASET_STRESS, cfg.NTRAIN_SIZE)
    model_list = functions.get_model_list(
        cfg.MODEL.ihlayers,
        cfg.MODEL.ineurons,
        input_dict_train["state_x"][0].shape[1],
        input_dict_train["state_y"][0].shape[1],
        input_dict_train["stress_x"][0].shape[1],
    )
    model_state_elasto, model_state_plastic, model_stress = model_list
    model_list_obj = ppsci.arch.ModelList(model_list)

    def _transform_in_stress(_in):
        return functions.transform_in_stress(
            _in, model_state_elasto, "out_state_elasto"
        )

    model_state_elasto.register_input_transform(functions.transform_in)
    model_state_plastic.register_input_transform(functions.transform_in)
    model_stress.register_input_transform(_transform_in_stress)
    model_stress.register_output_transform(functions.transform_out)

    output_keys = [
        "state_x",
        "state_y",
        "stress_x",
        "stress_y",
        "out_state_elasto",
        "out_state_plastic",
        "out_stress",
    ]
    sup_validator_pde = ppsci.validate.SupervisedValidator(
        {
            "dataset": {
                "name": "NamedArrayDataset",
                "input": input_dict_val,
                "label": label_dict_val,
            },
            "batch_size": 1,
            "num_workers": 0,
        },
        ppsci.loss.FunctionalLoss(functions.eval_loss_func),
        {key: (lambda out, k=key: out[k]) for key in output_keys},
        metric={"metric": ppsci.metric.FunctionalMetric(functions.metric_expr)},
        name="sup_valid",
    )
    validator_pde = {sup_validator_pde.name: sup_validator_pde}
    functions.OUTPUT_DIR = cfg.output_dir

    # initialize solver
    solver = ppsci.solver.Solver(
        model_list_obj,
        output_dir=cfg.output_dir,
        validator=validator_pde,
        pretrained_model_path=cfg.EVAL.pretrained_model_path,
        eval_with_no_grad=cfg.EVAL.eval_with_no_grad,
    )
    # evaluate
    solver.eval()


@hydra.main(version_base=None, config_path="./conf", config_name="epnn.yaml")
def main(cfg: DictConfig):
    if cfg.mode == "train":
        train(cfg)
    elif cfg.mode == "eval":
        evaluate(cfg)
    else:
        raise ValueError(f"cfg.mode should in ['train', 'eval'], but got '{cfg.mode}'")


if __name__ == "__main__":
    main()

5. Results

The EPNN example was run with epochs=10000; the final Loss is 0.00471.

The figures below show the Loss, training error, and cross-validation error over the epochs:

(Figure: loss_trend — Loss, training error, and cross-validation error curves during training)

6. References