Laura Dörr, Felix Brandt, Anne Meyer, Martin Pouls
"Lean Training Data Generation for Planar Object Detection Models in Unsteady Logistics Contexts"
2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), December 2019
DOI: 10.1109/ICMLA.2019.00062
Citations: 4
Abstract
Supervised deep learning has become the state-of-the-art method for object detection and is used in many application areas, such as autonomous driving, manufacturing, and security systems. Acquiring annotated data sets for training neural networks is highly time-consuming and error-prone, so the supervised training of such object detection models is not feasible in some cases. This holds for the task of logistics transport label detection: this use case requires highly specialized, quickly adapting models while allowing for little manual effort in the data preparation and training process. We propose a simple training data generation method that enables the fully automated training of specialized models for logistics transport label detection. For data synthesis, we stitch instances of the transport labels to be detected into background images while applying image degradation and augmentation methods. We evaluate the use of both carefully selected, use-case-specific background images and randomly selected real-world background images. Further, we compare two data generation approaches: one that generates realistic-looking images and a simpler one that requires no manual image annotation at all. We examine and evaluate the introduced method on a new, publicly available example data set for logistics transport label detection. We show that accurate models can be trained exclusively on synthetic training data, and we compare their performance to that of models trained on real, manually annotated images.
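The core synthesis step the abstract describes — stitching a transport-label crop into a background image, applying degradation, and deriving the bounding-box annotation for free — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name `synthesize_sample`, the additive-noise degradation, and all parameters are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

def synthesize_sample(background, label_crop, noise_sigma=8.0):
    """Paste a transport-label crop into a background image at a random
    position, add mild Gaussian noise as a simple degradation, and return
    the image plus the bounding-box annotation (x, y, w, h) of the label.
    (Hypothetical sketch of the paste-and-degrade synthesis idea.)"""
    bh, bw = background.shape[:2]
    lh, lw = label_crop.shape[:2]
    assert lh <= bh and lw <= bw, "label crop must fit inside background"

    # Random top-left corner for the pasted label.
    y = int(rng.integers(0, bh - lh + 1))
    x = int(rng.integers(0, bw - lw + 1))

    img = background.astype(np.float32).copy()
    img[y:y + lh, x:x + lw] = label_crop

    # Degradation: additive Gaussian noise, clipped back to the valid range.
    img += rng.normal(0.0, noise_sigma, img.shape)
    img = np.clip(img, 0, 255).astype(np.uint8)

    # The annotation comes from the paste position -- no manual labeling.
    return img, (x, y, lw, lh)

# Usage: a plain gray background and a bright rectangular "label" patch.
bg = np.full((480, 640, 3), 128, dtype=np.uint8)
label = np.full((60, 100, 3), 250, dtype=np.uint8)
image, bbox = synthesize_sample(bg, label)
```

Because the paste position is known, the bounding box is obtained automatically, which is what makes the approach fully automated: no human ever draws a box. Real pipelines of this kind typically add further augmentations (rotation, perspective warp, blur, lighting changes) in the same spirit.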