AWADA: Foreground-focused adversarial learning for cross-domain object detection
Maximilian Menke, Thomas Wenzel, Andreas Schwung
Computer Vision and Image Understanding · Published 2024-10-05 · DOI: 10.1016/j.cviu.2024.104153
https://www.sciencedirect.com/science/article/pii/S1077314224002340
Abstract
Object detection networks have achieved impressive results, but replicating this success in practical applications can be challenging due to a lack of relevant task-specific data. Typically, additional data sources are used to support the training process; however, the domain gaps between these data sources present a challenge. Adversarial image-to-image style transfer is often used to bridge this gap, but it is not directly connected to the object detection task and can be unstable. We propose AWADA, an attention-weighted adversarial domain adaptation framework that connects style transfer and object detection. By using object detector proposals to create attention maps for foreground objects, we focus the style transfer on these regions and stabilize its training. Our results demonstrate that AWADA reaches state-of-the-art unsupervised domain adaptation performance on three commonly used benchmarks.
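The abstract outlines the core mechanism: detector proposals are turned into a foreground attention map that spatially weights the adversarial style-transfer objective. The following is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation; the function names (build_attention_map, weighted_adversarial_loss), the least-squares patch-discriminator loss, and the floor parameter are assumptions made for illustration.

# Illustrative sketch only (not the AWADA code): detector proposals ->
# foreground attention map -> spatially weighted adversarial loss for the
# style-transfer generator. Names and the LSGAN-style loss are assumptions.
import torch

def build_attention_map(proposals, scores, image_size, floor=0.1):
    """Accumulate detector proposals into a [floor, 1] foreground attention map.

    proposals:  (N, 4) tensor of boxes (x1, y1, x2, y2) in pixel coordinates.
    scores:     (N,) tensor of objectness/confidence scores in [0, 1].
    image_size: (H, W) of the image the style transfer operates on.
    floor:      minimum weight kept in background regions so their gradient
                signal is damped rather than removed entirely.
    """
    h, w = image_size
    attention = torch.zeros(h, w)
    for (x1, y1, x2, y2), s in zip(proposals.round().long(), scores):
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), w), min(int(y2), h)
        # Where boxes overlap, keep the strongest confidence.
        attention[y1:y2, x1:x2] = torch.clamp(attention[y1:y2, x1:x2], min=float(s))
    return attention.clamp(min=floor)

def weighted_adversarial_loss(disc_logits, attention, real=True):
    """Least-squares GAN loss weighted per location by the attention map.

    disc_logits: (B, 1, H', W') patch-discriminator output.
    attention:   (H, W) map, resized to the discriminator grid before weighting.
    """
    target = torch.ones_like(disc_logits) if real else torch.zeros_like(disc_logits)
    weight = torch.nn.functional.interpolate(
        attention[None, None], size=disc_logits.shape[-2:],
        mode="bilinear", align_corners=False,
    )
    return (weight * (disc_logits - target) ** 2).mean()

if __name__ == "__main__":
    boxes = torch.tensor([[30.0, 40.0, 120.0, 200.0], [150.0, 60.0, 220.0, 180.0]])
    conf = torch.tensor([0.9, 0.6])
    attn = build_attention_map(boxes, conf, image_size=(256, 256))
    fake_logits = torch.randn(1, 1, 30, 30)  # e.g. a PatchGAN-style output grid
    loss = weighted_adversarial_loss(fake_logits, attn, real=True)
    print(attn.shape, float(loss))

In this sketch the floor argument keeps a small background weight so the generator still receives a global style signal; how AWADA actually treats background regions is specified in the paper itself.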
About the Journal:
The central focus of this journal is the computer analysis of pictorial information. Computer Vision and Image Understanding publishes papers covering all aspects of image analysis from the low-level, iconic processes of early vision to the high-level, symbolic processes of recognition and interpretation. A wide range of topics in the image understanding area is covered, including papers offering insights that differ from predominant views.
Research Areas Include:
• Theory
• Early vision
• Data structures and representations
• Shape
• Range
• Motion
• Matching and recognition
• Architecture and languages
• Vision systems