{"title":"Plug-and-Play Deblurring for Robust Object Detection","authors":"Gerald Xie, Zhu Li, S. Bhattacharyya, A. Mehmood","doi":"10.1109/VCIP53242.2021.9675437","DOIUrl":null,"url":null,"abstract":"Object detection is a classic computer vision task, which learns the mapping between an image and object bounding boxes + class labels. Many applications of object detection involve images which are prone to degradation at capture time, notably motion blur from a moving camera like UAVs or object itself. One approach to handling this blur involves using common deblurring methods to recover the clean pixel images and then the apply vision task. This task is typically ill-posed. On top of this, application of these methods also add onto the inference time of the vision network, which can hinder performance of video inputs. To address the issues, we propose a novel plug-and-play (PnP) solution that insert deblurring features into the target vision task network without the need to retrain the task network. The deblur features are learned from a classification loss network on blur strength and directions, and the PnP scheme works well with the object detection network with minimum inference time complexity, compared with the state of the art deblur and then detection solution.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Visual Communications and Image Processing (VCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP53242.2021.9675437","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Object detection is a classic computer vision task that learns the mapping from an image to object bounding boxes and class labels. Many applications of object detection involve images that are prone to degradation at capture time, notably motion blur caused by a moving camera (e.g., on a UAV) or by the object itself. One approach to handling this blur is to apply common deblurring methods to recover a clean image and then run the vision task; however, deblurring is typically an ill-posed problem. Moreover, these methods add to the inference time of the vision network, which can hinder performance on video inputs. To address these issues, we propose a novel plug-and-play (PnP) solution that inserts deblurring features into the target vision task network without retraining the task network. The deblur features are learned from a classification loss network over blur strengths and directions, and the PnP scheme works well with the object detection network with minimal added inference time, compared with the state-of-the-art deblur-then-detect solution.
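For illustration only, the PyTorch sketch below shows one way such a plug-and-play fusion could be wired: a small side network, pre-trained with a classification loss over blur strength and direction, supplies feature maps that are added into a frozen detector's multi-scale features. The module names (BlurFeatureNet, PnPFusedBackbone), the fusion point (the FPN outputs of a torchvision Faster R-CNN), the additive fusion, and all layer shapes are assumptions for exposition; they are not the authors' implementation.

```python
# Sketch of plug-and-play deblur-feature fusion into a frozen detector (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class BlurFeatureNet(nn.Module):
    """Small CNN trained separately with a classification loss over blur strength/direction.
    Its convolutional features are reused as the 'deblur features' for fusion."""
    def __init__(self, num_blur_classes=12, feat_channels=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, feat_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(feat_channels, num_blur_classes)
        )

    def forward(self, x):
        feats = self.features(x)                 # feature maps used for fusion
        return feats, self.classifier(feats)     # logits used only during blur pre-training


class PnPFusedBackbone(nn.Module):
    """Wraps the detector's FPN backbone and adds projected blur features to each level."""
    def __init__(self, fpn_backbone, blur_net, feat_channels=256):
        super().__init__()
        self.fpn_backbone = fpn_backbone
        self.blur_net = blur_net
        self.out_channels = fpn_backbone.out_channels   # keep the interface the heads expect
        self.proj = nn.Conv2d(feat_channels, self.out_channels, kernel_size=1)

    def forward(self, x):
        fpn_feats = self.fpn_backbone(x)          # OrderedDict of multi-scale feature maps
        blur_feats, _ = self.blur_net(x)          # deblur features from the side network
        for name, fmap in fpn_feats.items():
            side = F.interpolate(self.proj(blur_feats), size=fmap.shape[-2:], mode="bilinear")
            fpn_feats[name] = fmap + side         # simple additive fusion (one possible choice)
        return fpn_feats


# Usage sketch: the detection network itself stays frozen (no retraining of the task network);
# only the blur side network and the 1x1 projection are new components.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
for p in detector.parameters():
    p.requires_grad = False
detector.backbone = PnPFusedBackbone(detector.backbone, BlurFeatureNet())
detector.eval()
with torch.no_grad():
    preds = detector([torch.rand(3, 480, 640)])  # list of dicts with boxes, labels, scores
```

Because the fused features enter the detector in a single forward pass, this style of insertion avoids the extra cost of running a full deblurring network before detection, which is the inference-time advantage the abstract claims over deblur-then-detect pipelines.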