One-Shot Object Detection in Heterogeneous Artwork Datasets
Prathmesh Madhu, Anna Meyer, Mathias Zinnen, Lara Mührenberg, Dirk Suckow, Torsten Bendschus, Corinna Reinhardt, Peter Bell, Ute Verstegen, Ronak Kosti, A. Maier, V. Christlein
2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA), published 2022-04-19
DOI: 10.1109/IPTA54936.2022.9784141
Citations: 1
Abstract
Christian archeologists face many challenges in understanding visual narration through artwork images. This understanding is essential to access the underlying semantic information. Therefore, narrative elements (objects) need to be labeled, compared, and contextualized by experts, which takes an enormous amount of time and effort. Our work aims to reduce labeling costs by using one-shot object detection to generate a labeled database from unannotated images. Novel object categories can be defined broadly and annotated using visual examples of narrative elements, without training exclusively for such objects. In this work, we propose two ways of using contextual information as data augmentation to improve detection performance. Furthermore, we introduce a multi-relation detector to our framework, which extracts global, local, and patch-based relations of the image. Additionally, we evaluate the use of contrastive learning. We use data from Christian archeology (CHA) and art history, IconArt-v2 (IA). Our context encoding approach improves on the typical fine-tuning approach in terms of mean average precision (mAP) by about 3.5 % (4 %) at an intersection over union (IoU) of 0.25 for UnSeen categories, and by 6 % (1.5 %) for Seen categories on CHA (IA). To the best of our knowledge, our work is the first to explore few-shot object detection on heterogeneous artistic data by investigating evaluation methods and data augmentation strategies. We will release the code and models after acceptance of the work.
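For illustration only, the sketch below shows how a multi-relation matching head of the kind mentioned in the abstract could be structured in PyTorch: a query proposal feature is compared to a one-shot support feature through a global, a local, and a patch-based relation, and the three scores are fused. All module names, feature sizes, and the additive fusion are assumptions for this sketch and not the authors' implementation.

```python
# Minimal sketch (assumptions, not the authors' code) of a multi-relation matching head:
# compares RoI-pooled query features against a one-shot support feature via
# global, local, and patch-based relations.
import torch
import torch.nn as nn


class MultiRelationHead(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # Global relation: spatially pooled query and support descriptors, scored by an MLP.
        self.global_fc = nn.Sequential(
            nn.Linear(2 * channels, channels), nn.ReLU(), nn.Linear(channels, 1)
        )
        # Local relation: per-channel similarity between query map and pooled support vector.
        self.local_fc = nn.Linear(channels, 1)
        # Patch relation: convolve the concatenated feature maps patch-wise, then pool.
        self.patch_conv = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.patch_fc = nn.Linear(channels, 1)

    def forward(self, query: torch.Tensor, support: torch.Tensor) -> torch.Tensor:
        # query, support: (N, C, S, S) RoI-pooled features of proposals / the support example.
        q_vec = query.mean(dim=(2, 3))
        s_vec = support.mean(dim=(2, 3))

        # Global relation on pooled descriptors.
        g_score = self.global_fc(torch.cat([q_vec, s_vec], dim=1))

        # Local relation: channel-wise similarity averaged over spatial locations.
        sim = (query * s_vec[:, :, None, None]).mean(dim=(2, 3))
        l_score = self.local_fc(sim)

        # Patch relation on the concatenated feature maps.
        p_feat = self.patch_conv(torch.cat([query, support], dim=1)).flatten(1)
        p_score = self.patch_fc(p_feat)

        # Fuse the three relation scores into one matching score per proposal.
        return torch.sigmoid(g_score + l_score + p_score)


if __name__ == "__main__":
    head = MultiRelationHead()
    q = torch.randn(4, 256, 7, 7)                                      # 4 query proposals
    s = torch.randn(1, 256, 7, 7).expand(4, -1, -1, -1).contiguous()   # one support example
    print(head(q, s).shape)  # torch.Size([4, 1])
```

In a one-shot detection pipeline, such a head would score each region proposal against the single annotated example of a narrative element; the fusion strategy and the exact relation modules are design choices that the paper itself does not spell out in the abstract.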