# Scene Text Recognition Algorithm: NRTR

## 1. Introduction
Paper:
> [NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition](http://arxiv.org/abs/1806.00926)
> Fenfen Sheng, Zhineng Chen, and Bo Xu
> ICDAR, 2019

NRTR is trained on the MJSynth and SynthText text recognition datasets and evaluated on the IIIT, SVT, IC03, IC13, IC15, SVTP, and CUTE datasets. The reproduced results are as follows:
| Model | Backbone | Config | Acc | Download |
| --- | --- | --- | --- | --- |
| NRTR | MTB | `rec_mtb_nrtr.yml` | 84.21% | trained model |
## 2. Environment

Please refer to "Environment Preparation" to set up the PaddleOCR environment, and to "Project Clone" to clone the project code.
## 3. Model Training, Evaluation, and Prediction

### 3.1 Training

Please refer to the Text Recognition Training Tutorial. PaddleOCR has modularized the code; to train the NRTR recognition model, you only need to switch the configuration file to NRTR's configuration file.

#### Start Training

Specifically, once data preparation is complete, training can be started with the commands below.
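A typical invocation, assuming the default config path `configs/rec/rec_mtb_nrtr.yml`; adjust GPU ids and paths to your environment:

```bash
# single-GPU training (long training time, not recommended)
python3 tools/train.py -c configs/rec/rec_mtb_nrtr.yml

# multi-GPU training; specify the GPU ids with the --gpus option
python3 -m paddle.distributed.launch --gpus '0,1,2,3' tools/train.py -c configs/rec/rec_mtb_nrtr.yml
```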
### 3.2 Evaluation

You can download the trained model file and evaluate it with the following command:
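A typical evaluation command; replace the `{path/to/weights}` placeholder with the directory holding your weights:

```bash
# GPU evaluation; Global.pretrained_model points to the weights to be evaluated
python3 -m paddle.distributed.launch --gpus '0' tools/eval.py -c configs/rec/rec_mtb_nrtr.yml -o Global.pretrained_model={path/to/weights}/best_accuracy
```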
### 3.3 Prediction

Use the following command to run prediction on a single image:
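For example, the sketch below uses a sample image shipped with the repository; the configuration file used for prediction must match the one used for training:

```bash
python3 tools/infer_rec.py -c configs/rec/rec_mtb_nrtr.yml -o Global.pretrained_model={path/to/weights}/best_accuracy Global.infer_img=doc/imgs_words_en/word_10.png
```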
## 4. Inference and Deployment

### 4.1 Python Inference

First, convert the best model obtained from training into an inference model. Taking the trained model linked in the table above as an example, the conversion can be done with the command below.
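A sketch of the conversion command, assuming the downloaded training model is unpacked to `./rec_mtb_nrtr_train/` (the directory names are illustrative):

```bash
python3 tools/export_model.py -c configs/rec/rec_mtb_nrtr.yml -o Global.pretrained_model=./rec_mtb_nrtr_train/best_accuracy Global.save_inference_dir=./inference/rec_mtb_nrtr/
```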
Note:

- If you trained the model on your own dataset and adjusted the character dictionary, check that `character_dict_path` in the configuration file points to the dictionary file you need.
- If you modified the input size during training, update the `infer_shape` entry for NRTR in `tools/export_model.py` accordingly.
After a successful conversion, there are three files in the output directory:
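For Paddle inference models these typically follow the standard export naming convention (a sketch of the expected layout, assuming the export directory above):

```
inference/rec_mtb_nrtr/
    ├── inference.pdiparams         # parameters of the inference model
    ├── inference.pdiparams.info    # parameter info, can be ignored
    └── inference.pdmodel           # program file of the inference model
```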
Run the following command for model inference:
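A typical invocation; the parameter values here are examples, and the dictionary path is assumed to match the default NRTR config (see the notes below for how these must match your training setup):

```bash
python3 tools/infer/predict_rec.py --image_dir='./doc/imgs_words_en/word_10.png' --rec_model_dir='./inference/rec_mtb_nrtr/' --rec_algorithm='NRTR' --rec_image_shape='1,32,100' --rec_char_dict_path='./ppocr/utils/EN_symbol_dict.txt'
```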
After running the command, the prediction result (recognized text and score) for the image above is printed to the screen, for example:
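The output takes roughly this form (illustrative, not an actual run; the recognized text and score depend on your model and image):

```
Predicts of ./doc/imgs_words_en/word_10.png:('<recognized text>', <confidence score>)
```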
Note:

- The image resolution used when training the above model is [1, 32, 100]; set the `rec_image_shape` parameter to the recognition image shape used in your training.
- During inference, the `rec_char_dict_path` parameter specifies the dictionary; if you modified the dictionary, point this parameter to your dictionary file.
- If you modified the preprocessing, change the NRTR preprocessing in `tools/infer/predict_rec.py` to your own preprocessing method.
### 4.2 C++ Inference

Not supported yet, since C++ pre- and post-processing for NRTR has not been implemented.

### 4.3 Serving Deployment

Not supported yet.

### 4.4 More Deployment Options

Not supported yet.
## 5. FAQ

The NRTR paper decodes characters with beam search, but it is slow. Beam search is therefore disabled by default here, and characters are decoded with greedy search instead.
## 6. Release Notes

- release/2.6 updated the NRTR code structure. The new NRTR can load model parameters from old versions (release/2.5 and earlier); the example code below converts old-version model parameters into new-version ones:
```python
import numpy as np
import paddle

# `model` is assumed to be an already-instantiated new-version (release/2.6) NRTR model
params = paddle.load('path/' + '.pdparams')  # old-version parameters
state_dict = model.state_dict()  # new-version model parameters
new_state_dict = {}

for k1, v1 in state_dict.items():
    k = k1
    if 'encoder' in k and 'self_attn' in k and 'qkv' in k and 'weight' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        # the old version stores q/k/v as separate 1x1 convs; fuse them into one qkv weight
        q = params[k_para.replace('qkv', 'conv1')].transpose((1, 0, 2, 3))
        k = params[k_para.replace('qkv', 'conv2')].transpose((1, 0, 2, 3))
        v = params[k_para.replace('qkv', 'conv3')].transpose((1, 0, 2, 3))
        new_state_dict[k1] = np.concatenate(
            [q[:, :, 0, 0], k[:, :, 0, 0], v[:, :, 0, 0]], -1)
    elif 'encoder' in k and 'self_attn' in k and 'qkv' in k and 'bias' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        q = params[k_para.replace('qkv', 'conv1')]
        k = params[k_para.replace('qkv', 'conv2')]
        v = params[k_para.replace('qkv', 'conv3')]
        new_state_dict[k1] = np.concatenate([q, k, v], -1)
    elif 'encoder' in k and 'self_attn' in k and 'out_proj' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        new_state_dict[k1] = params[k_para]
    elif 'encoder' in k and 'norm3' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        new_state_dict[k1] = params[k_para.replace('norm3', 'norm2')]
    elif 'encoder' in k and 'norm1' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        new_state_dict[k1] = params[k_para]
    elif 'decoder' in k and 'self_attn' in k and 'qkv' in k and 'weight' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        q = params[k_para.replace('qkv', 'conv1')].transpose((1, 0, 2, 3))
        k = params[k_para.replace('qkv', 'conv2')].transpose((1, 0, 2, 3))
        v = params[k_para.replace('qkv', 'conv3')].transpose((1, 0, 2, 3))
        new_state_dict[k1] = np.concatenate(
            [q[:, :, 0, 0], k[:, :, 0, 0], v[:, :, 0, 0]], -1)
    elif 'decoder' in k and 'self_attn' in k and 'qkv' in k and 'bias' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        q = params[k_para.replace('qkv', 'conv1')]
        k = params[k_para.replace('qkv', 'conv2')]
        v = params[k_para.replace('qkv', 'conv3')]
        new_state_dict[k1] = np.concatenate([q, k, v], -1)
    elif 'decoder' in k and 'self_attn' in k and 'out_proj' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        new_state_dict[k1] = params[k_para]
    elif 'decoder' in k and 'cross_attn' in k and 'q' in k and 'weight' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('cross_attn', 'multihead_attn')
        q = params[k_para.replace('q', 'conv1')].transpose((1, 0, 2, 3))
        new_state_dict[k1] = q[:, :, 0, 0]
    elif 'decoder' in k and 'cross_attn' in k and 'q' in k and 'bias' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('cross_attn', 'multihead_attn')
        q = params[k_para.replace('q', 'conv1')]
        new_state_dict[k1] = q
    elif 'decoder' in k and 'cross_attn' in k and 'kv' in k and 'weight' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('cross_attn', 'multihead_attn')
        k = params[k_para.replace('kv', 'conv2')].transpose((1, 0, 2, 3))
        v = params[k_para.replace('kv', 'conv3')].transpose((1, 0, 2, 3))
        new_state_dict[k1] = np.concatenate([k[:, :, 0, 0], v[:, :, 0, 0]], -1)
    elif 'decoder' in k and 'cross_attn' in k and 'kv' in k and 'bias' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('cross_attn', 'multihead_attn')
        k = params[k_para.replace('kv', 'conv2')]
        v = params[k_para.replace('kv', 'conv3')]
        new_state_dict[k1] = np.concatenate([k, v], -1)
    elif 'decoder' in k and 'cross_attn' in k and 'out_proj' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('cross_attn', 'multihead_attn')
        new_state_dict[k1] = params[k_para]
    elif 'decoder' in k and 'norm' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        new_state_dict[k1] = params[k_para]
    elif 'mlp' in k and 'weight' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('fc', 'conv')
        k_para = k_para.replace('mlp.', '')
        w = params[k_para].transpose((1, 0, 2, 3))
        new_state_dict[k1] = w[:, :, 0, 0]
    elif 'mlp' in k and 'bias' in k:
        k_para = k[:13] + 'layers.' + k[13:]
        k_para = k_para.replace('fc', 'conv')
        k_para = k_para.replace('mlp.', '')
        w = params[k_para]
        new_state_dict[k1] = w
    else:
        new_state_dict[k1] = params[k1]

    if list(new_state_dict[k1].shape) != list(v1.shape):
        print(k1)

# sanity check: every new-version parameter must be present with a matching shape
for k, v1 in state_dict.items():
    if k not in new_state_dict.keys():
        print(1, k)
    elif list(new_state_dict[k].shape) != list(v1.shape):
        print(2, k)

model.set_state_dict(new_state_dict)
paddle.save(model.state_dict(), 'nrtrnew_from_old_params.pdparams')
```
- Compared with the old version, the new version has a simpler code structure and faster inference.
## Citation

```bibtex
@inproceedings{Sheng2019NRTR,
  title     = {NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition},
  author    = {Fenfen Sheng and Zhineng Chen and Bo Xu},
  booktitle = {ICDAR},
  year      = {2019},
  url       = {http://arxiv.org/abs/1806.00926},
  pages     = {781-786}
}
```