{"title":"Research on GAN-based Container Code Images Generation Method","authors":"Yan-Cheng Liang, Hanbing Yao","doi":"10.1109/DCABES50732.2020.00059","DOIUrl":null,"url":null,"abstract":"Recognizing images based on deep learning algorithms requires sufficient samples as a training dataset. In the port field, there is also a lack of container image datasets for deep learning research. This paper proposes a model based on GAN's container box character sample extended dataset (C-SAGAN), and addresses the problems of container box code character defaced and corrupt caused by the port environment, the generative adversarial network is trained with a small amount of real images to generate container character samples. The C-SAGAN model introduces class tags and self-attention in the generator and discriminator. The class tags can control the image generation process. The self-attention mechanism can extract image features based on global information and generate image samples with clear details. The experimental results show that the quality of the samples generated by the generative adversarial network model proposed in this paper is excellent. 
The samples are used in the CRNN model as the training dataset and the real images are used as the test sets, won the high recognition rate.","PeriodicalId":351404,"journal":{"name":"2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 19th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DCABES50732.2020.00059","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1
Abstract
Recognizing images with deep-learning algorithms requires a sufficient number of samples for training, and in the port domain there is a lack of container image datasets for such research. This paper proposes a GAN-based model for extending container-code character sample datasets (C-SAGAN). To address container-code characters that are defaced and corrupted by the port environment, the generative adversarial network is trained on a small number of real images to generate container character samples. The C-SAGAN model introduces class labels and self-attention into both the generator and the discriminator: the class labels control the image generation process, while the self-attention mechanism extracts image features from global information and produces samples with clear details. Experimental results show that the samples generated by the proposed model are of high quality. When these samples are used as the training set for a CRNN model, with real images as the test set, a high recognition rate is achieved.
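The abstract's self-attention mechanism (as in SAGAN) lets each position in a feature map attend to all other positions, so generated details draw on global context rather than only local convolutions. A minimal NumPy sketch of this computation is shown below; the projection matrices `Wf`, `Wg`, `Wh`, the mixing weight `gamma`, and the toy dimensions are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wf, Wg, Wh, gamma=0.1):
    """SAGAN-style self-attention over a flattened feature map (sketch).

    x       : (N, C) array -- N spatial positions, C channels.
    Wf, Wg  : (C, Ck) query/key projections (illustrative assumptions).
    Wh      : (C, C) value projection.
    gamma   : learned residual weight in the real model; a constant here.
    """
    f = x @ Wf                        # queries, (N, Ck)
    g = x @ Wg                        # keys,    (N, Ck)
    h = x @ Wh                        # values,  (N, C)
    attn = softmax(f @ g.T, axis=-1)  # (N, N); each row sums to 1
    return gamma * (attn @ h) + x     # residual: global mix added to input

rng = np.random.default_rng(0)
N, C, Ck = 16, 8, 4                   # e.g. a 4x4 map flattened, 8 channels
x = rng.standard_normal((N, C))
out = self_attention(x,
                     rng.standard_normal((C, Ck)),
                     rng.standard_normal((C, Ck)),
                     rng.standard_normal((C, C)))
print(out.shape)  # (16, 8) -- output keeps the input's shape
```

Because the attention map is `(N, N)`, every output position is a weighted sum over all input positions, which is how the mechanism "extracts image features based on global information."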