{"title":"Arbitrary Direction Inkjet Character Recognition Based on Spatial Transformation","authors":"Wentao Cai, Hao Zhao, Heng Wang, Xue Deng","doi":"10.1109/CCISP55629.2022.9974507","DOIUrl":null,"url":null,"abstract":"Aiming at the problem of the low recognition accuracy caused by the arbitrary characters. In this paper, we propose an arbitrary direction character recognition network. Firstly, a lightweight spatial transformation network (STNet) is designed based on the MobileNetV2, which is used to extract the spatial features of the arbitrary characters and perform spatial transformation. Simultaneously, we introduced the SE attention block into the feature extraction backbone network, which makes the network focuses on the key regions of characters. Then, we build a text recognizer based on recurrent neural network and introduce the Connectionist Temporal Classification (CTC) loss to achieve the flexible alignment between the visual features and the prediction outputs. Extensive experiments are carried out on the IIIT5K and a self-made inkjet characters dataset. The recognition accuracy of our proposed method reaches 95.7% and 86.3% respectively. Compared with the benchmarks, the maximum accuracy of the proposed method is improved by 17.5%. Experimental results show the effectiveness of our proposed method.","PeriodicalId":431851,"journal":{"name":"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Communication, Image and Signal Processing (CCISP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCISP55629.2022.9974507","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Low recognition accuracy caused by arbitrarily oriented characters is a common problem in inkjet character recognition. In this paper, we propose an arbitrary-direction character recognition network. First, a lightweight spatial transformation network (STNet) is designed based on MobileNetV2; it extracts the spatial features of arbitrarily oriented characters and performs the spatial transformation. We also introduce an SE attention block into the feature extraction backbone, which makes the network focus on the key regions of the characters. Then, we build a text recognizer based on a recurrent neural network and introduce the Connectionist Temporal Classification (CTC) loss to achieve flexible alignment between the visual features and the prediction outputs. Extensive experiments are carried out on IIIT5K and a self-made inkjet character dataset, where the recognition accuracy of the proposed method reaches 95.7% and 86.3%, respectively. Compared with the benchmarks, the maximum accuracy improvement of the proposed method is 17.5%. The experimental results show the effectiveness of the proposed method.
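The abstract does not include implementation details, so the following is only a minimal PyTorch sketch of the kind of pipeline it describes: a spatial-transformer rectification stage, a MobileNetV2 backbone with squeeze-and-excitation (SE) channel attention, a bidirectional LSTM sequence model, and CTC loss. All module names (SEBlock, STNet, InkjetRecognizer), layer sizes, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of an STN + MobileNetV2/SE + BiLSTM + CTC recognizer.
# Architecture choices and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import mobilenet_v2


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., 2018)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze -> per-channel weights
        return x * w[:, :, None, None]           # excite: reweight channels


class STNet(nn.Module):
    """Spatial transformer: predicts a 2x3 affine matrix and warps the input."""
    def __init__(self):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform so training starts with no warp.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


class InkjetRecognizer(nn.Module):
    """STN rectification -> MobileNetV2 + SE features -> BiLSTM -> CTC logits."""
    def __init__(self, num_classes: int):
        super().__init__()
        self.stn = STNet()
        self.backbone = mobilenet_v2(weights=None).features   # (N, 1280, H', W')
        self.se = SEBlock(1280)
        self.rnn = nn.LSTM(1280, 256, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(512, num_classes + 1)            # +1 for CTC blank

    def forward(self, images):                                 # (N, 3, H, W)
        feats = self.se(self.backbone(self.stn(images)))
        feats = feats.mean(dim=2).permute(0, 2, 1)             # collapse height -> (N, W', C)
        seq, _ = self.rnn(feats)
        return self.head(seq).log_softmax(dim=-1)              # (N, T, num_classes + 1)


# Training step with CTC loss; class 36 is reserved as the blank symbol.
model = InkjetRecognizer(num_classes=36)
ctc = nn.CTCLoss(blank=36, zero_infinity=True)
images = torch.randn(2, 3, 32, 256)              # dummy 32x256 text crops
targets = torch.randint(0, 36, (2, 5))           # dummy label index sequences
log_probs = model(images).permute(1, 0, 2)       # CTC expects (T, N, C)
T, N = log_probs.shape[:2]
loss = ctc(log_probs, targets,
           input_lengths=torch.full((N,), T, dtype=torch.long),
           target_lengths=torch.full((N,), 5, dtype=torch.long))
loss.backward()
```

The CTC loss is what allows the "flexible alignment" mentioned in the abstract: the BiLSTM emits one distribution per horizontal feature column, and CTC marginalizes over all monotonic alignments between those columns and the ground-truth character sequence, so no per-character segmentation of the inkjet string is required.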