
General Chinese and English OCR dataset

This is a collection of commonly used Chinese OCR datasets and is updated continuously. You are welcome to contribute to this list.

In addition to open-source data, users can also synthesize data themselves. Currently available synthesis tools include text_renderer, SynthText, and TextRecognitionDataGenerator; a minimal usage sketch is given below.
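
The following is a minimal sketch of synthesizing labeled text-line images with the trdg package (the Python API of TextRecognitionDataGenerator). The parameter names and the output layout are assumptions; check them against the version you install.

import os
from trdg.generators import GeneratorFromStrings

OUT_DIR = "synth_data"  # hypothetical output directory
os.makedirs(OUT_DIR, exist_ok=True)

# Text lines to render; replace with your own corpus.
lines = ["魅派集成吊顶", "母婴用品连锁", "hello world"]

generator = GeneratorFromStrings(
    lines,
    count=len(lines),  # one image per line
    language="cn",     # assumed code for the bundled Chinese fonts/charset
    size=48,           # output image height in pixels
)

labels = []
for i, (image, label) in enumerate(generator):
    # Each item is a (PIL image, label string) pair.
    image.save(os.path.join(OUT_DIR, f"{i:06d}.jpg"))
    labels.append(f"{i:06d}.jpg\t{label}")

# Write a simple "filename<TAB>label" list alongside the images.
with open(os.path.join(OUT_DIR, "labels.txt"), "w", encoding="utf-8") as f:
    f.write("\n".join(labels))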

1. ICDAR2019-LSVT

2. ICDAR2017-RCTW-17

  • Data source: https://rctw.vlrlab.net/
  • Introduction: It contains 12,000+ images, most of which were captured in the wild with mobile-phone cameras; some are screenshots. The images cover a variety of scenes, including street views, posters, menus, indoor scenes, and screenshots of mobile applications.
  • Download link: https://rctw.vlrlab.net/dataset/

3. Chinese Street View Text Recognition

  • Data source: https://aistudio.baidu.com/aistudio/competition/detail/8
  • Introduction: A total of 290,000 images are included, of which 210,000 are used as the training set (with labels) and 80,000 as the test set (without labels). The dataset was collected from Chinese street views and was formed by cropping the text-line regions (such as shop signs and landmarks) out of the street-view images. All images are preprocessed: using an affine transform, each text region is proportionally mapped to an image with a height of 48 pixels (a sketch of this normalization step is given after this list), for example:

    (a) Label: 魅派集成吊顶

    (b) Label: 母婴用品连锁

  • Download link: https://aistudio.baidu.com/aistudio/datasetdetail/8429
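
The dataset description above states that each text region is mapped to a height of 48 pixels. Below is a minimal sketch of the same normalization for your own crops, approximating the affine mapping with an aspect-preserving resize; it assumes OpenCV, and the file names are hypothetical.

import cv2

TARGET_HEIGHT = 48  # height used by the street-view dataset

def normalize_text_line(src_path, dst_path, target_height=TARGET_HEIGHT):
    # Scale a cropped text-line image so its height is target_height,
    # preserving the aspect ratio.
    image = cv2.imread(src_path)
    h, w = image.shape[:2]
    scale = target_height / h
    new_w = max(1, int(round(w * scale)))
    resized = cv2.resize(image, (new_w, target_height),
                         interpolation=cv2.INTER_LINEAR)
    cv2.imwrite(dst_path, resized)

# Hypothetical file names for illustration.
normalize_text_line("crop_0001.jpg", "crop_0001_h48.jpg")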

4. Chinese Document Text Recognition

5. ICDAR2019-ArT

6. Electronic seal dataset

References

ICDAR 2019-LSVT Challenge

@article{sun2019icdar,
  title={ICDAR 2019 Competition on Large-scale Street View Text with Partial Labeling--RRC-LSVT},
  author={Sun, Yipeng and Ni, Zihan and Chng, Chee-Kheng and Liu, Yuliang and Luo, Canjie and Ng, Chun Chet and Han, Junyu and Ding, Errui and Liu, Jingtuo and Karatzas, Dimosthenis and others},
  journal={arXiv preprint arXiv:1909.07741},
  year={2019}
}

ICDAR 2019-ArT Challenge

@article{chng2019icdar2019,
  title={ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text (RRC-ArT)},
  author={Chng, Chee-Kheng and Liu, Yuliang and Sun, Yipeng and Ng, Chun Chet and Luo, Canjie and Ni, Zihan and Fang, ChuanMing and Zhang, Shuaitao and Han, Junyu and Ding, Errui and others},
  journal={arXiv preprint arXiv:1909.07145},
  year={2019}
}
