Miao Yao, Yijing Lu, Jinteng Mou, Chen Yan, Dongjingdian Liu
{"title":"基于可学习Retinex的低光城市环境端到端自适应目标检测","authors":"Miao Yao, Yijing Lu, Jinteng Mou, Chen Yan, Dongjingdian Liu","doi":"10.1080/10589759.2023.2274011","DOIUrl":null,"url":null,"abstract":"ABSTRACTIn the smart city context, efficient urban surveillance under low-light conditions is crucial. Accurate object detection in dimly lit areas is vital for safety and nighttime driving. However, subpar, poorly lit images due to environmental or equipment limitations pose a challenge, affecting precision in tasks like object detection and segmentation. Existing solutions often involve time-consuming, inefficient image preprocessing and lack strong theoretical support for low-light city image enhancement. To address these issue, we propose an end-to-end pipeline named LAR-YOLO that leverages convolutional network to extract a set of image transformation parameters, and implements the Retinex theory to proficiently elevate the quality of the image. Unlike conventional approaches, this innovative method eliminates the need for hand-crafted parameters and can adaptively enhance each low-light image. Additionally, due to a restricted quantity of training data, the detection model may not achieve an adequate level of expertise to enhance detection accuracy. To tackle this challenge, we introduce a cross-domain learning approach that supplements the low-light model with knowledge from normal light scenarios. Our proof-of-principle experiments and ablation studies utilising ExDark and VOC datasets demonstrate that our proposed method outperforms similar low-light object detection algorithms by approximately 13% in terms of accuracy.KEYWORDS: Object detectionsmart cityRetinex theorylow-light image processingcross-domain learning AcknowledgmentsThis work was supported by the National Natural Science Foundation of China under Grant Nos. 62272462 and 51904294.Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis work was supported by the National Natural Science Foundation of China [51904294]; National Natural Science Foundation of China [62272462].","PeriodicalId":49746,"journal":{"name":"Nondestructive Testing and Evaluation","volume":"10 6","pages":"0"},"PeriodicalIF":3.0000,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"End-to-end adaptive object detection with learnable Retinex for low-light city environment\",\"authors\":\"Miao Yao, Yijing Lu, Jinteng Mou, Chen Yan, Dongjingdian Liu\",\"doi\":\"10.1080/10589759.2023.2274011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACTIn the smart city context, efficient urban surveillance under low-light conditions is crucial. Accurate object detection in dimly lit areas is vital for safety and nighttime driving. However, subpar, poorly lit images due to environmental or equipment limitations pose a challenge, affecting precision in tasks like object detection and segmentation. Existing solutions often involve time-consuming, inefficient image preprocessing and lack strong theoretical support for low-light city image enhancement. To address these issue, we propose an end-to-end pipeline named LAR-YOLO that leverages convolutional network to extract a set of image transformation parameters, and implements the Retinex theory to proficiently elevate the quality of the image. 
Unlike conventional approaches, this innovative method eliminates the need for hand-crafted parameters and can adaptively enhance each low-light image. Additionally, due to a restricted quantity of training data, the detection model may not achieve an adequate level of expertise to enhance detection accuracy. To tackle this challenge, we introduce a cross-domain learning approach that supplements the low-light model with knowledge from normal light scenarios. Our proof-of-principle experiments and ablation studies utilising ExDark and VOC datasets demonstrate that our proposed method outperforms similar low-light object detection algorithms by approximately 13% in terms of accuracy.KEYWORDS: Object detectionsmart cityRetinex theorylow-light image processingcross-domain learning AcknowledgmentsThis work was supported by the National Natural Science Foundation of China under Grant Nos. 62272462 and 51904294.Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis work was supported by the National Natural Science Foundation of China [51904294]; National Natural Science Foundation of China [62272462].\",\"PeriodicalId\":49746,\"journal\":{\"name\":\"Nondestructive Testing and Evaluation\",\"volume\":\"10 6\",\"pages\":\"0\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-11-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Nondestructive Testing and Evaluation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/10589759.2023.2274011\",\"RegionNum\":3,\"RegionCategory\":\"材料科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MATERIALS SCIENCE, CHARACTERIZATION & TESTING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nondestructive Testing and Evaluation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/10589759.2023.2274011","RegionNum":3,"RegionCategory":"材料科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATERIALS SCIENCE, CHARACTERIZATION & TESTING","Score":null,"Total":0}
End-to-end adaptive object detection with learnable Retinex for low-light city environment
ABSTRACT:
In the smart city context, efficient urban surveillance under low-light conditions is crucial. Accurate object detection in dimly lit areas is vital for safety and night-time driving. However, subpar, poorly lit images caused by environmental or equipment limitations pose a challenge, degrading precision in tasks such as object detection and segmentation. Existing solutions often rely on time-consuming, inefficient image preprocessing and lack strong theoretical support for low-light city image enhancement. To address these issues, we propose an end-to-end pipeline named LAR-YOLO that leverages a convolutional network to extract a set of image transformation parameters and applies Retinex theory to effectively improve image quality. Unlike conventional approaches, this method eliminates the need for hand-crafted parameters and can adaptively enhance each low-light image. Additionally, because of the limited quantity of training data, the detection model may not learn enough to reach high detection accuracy. To tackle this challenge, we introduce a cross-domain learning approach that supplements the low-light model with knowledge from normal-light scenarios. Our proof-of-principle experiments and ablation studies utilising the ExDark and VOC datasets demonstrate that the proposed method outperforms comparable low-light object detection algorithms by approximately 13% in terms of accuracy.

KEYWORDS: Object detection; smart city; Retinex theory; low-light image processing; cross-domain learning

Acknowledgments: This work was supported by the National Natural Science Foundation of China under Grant Nos. 62272462 and 51904294.

Disclosure statement: No potential conflict of interest was reported by the author(s).

Funding: This work was supported by the National Natural Science Foundation of China [51904294]; National Natural Science Foundation of China [62272462].
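As a rough illustration of the pipeline described in the abstract, the sketch below shows how a small convolutional network can predict per-image enhancement parameters that drive a Retinex-style adjustment before detection. This is a minimal PyTorch sketch of the general idea only, not the authors' LAR-YOLO implementation: the ParamNet module, the retinex_enhance function and the particular parameterisation (a per-image gamma and gain applied to a blur-based illumination estimate) are illustrative assumptions.

# Minimal sketch (not the authors' LAR-YOLO code): a small CNN predicts
# per-image enhancement parameters, which drive a Retinex-style adjustment
# (image = reflectance * illumination) before the image reaches a detector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParamNet(nn.Module):
    """Hypothetical parameter extractor: maps a low-light image to a
    gamma and a gain used to rescale the estimated illumination."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # predicts (gamma, gain) per image

    def forward(self, x):
        p = self.head(self.features(x).flatten(1))
        gamma = 0.2 + torch.sigmoid(p[:, :1]) * 0.8  # keep gamma in (0.2, 1.0)
        gain = 1.0 + torch.sigmoid(p[:, 1:]) * 2.0   # keep gain in (1.0, 3.0)
        return gamma, gain

def retinex_enhance(img, gamma, gain, eps=1e-4):
    """Single-scale Retinex-style enhancement: estimate illumination with a
    local average, adjust it with the predicted gamma/gain, and recompose."""
    illum = F.avg_pool2d(img, kernel_size=15, stride=1, padding=7)  # coarse illumination
    reflect = img / (illum + eps)                                   # reflectance estimate
    illum_adj = gain.view(-1, 1, 1, 1) * illum.clamp(min=eps) ** gamma.view(-1, 1, 1, 1)
    return (reflect * illum_adj).clamp(0.0, 1.0)

# Toy usage: a dark batch is brightened and could then be passed to a detector.
low_light = torch.rand(2, 3, 256, 256) * 0.2  # synthetic under-exposed images
gamma, gain = ParamNet()(low_light)
enhanced = retinex_enhance(low_light, gamma, gain)
print(enhanced.shape, enhanced.mean().item())

Because the parameter network is differentiable, the enhanced image can be fed directly into a detector and the whole chain trained end to end against the detection loss, which is the spirit of the adaptive, hand-crafting-free enhancement the abstract describes.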
About the journal:
Nondestructive Testing and Evaluation publishes the results of research and development in the underlying theory, novel techniques and applications of nondestructive testing and evaluation in the form of letters, original papers and review articles.
Articles concerning both the investigation of physical processes and the development of mechanical processes and techniques are welcomed. Studies of conventional techniques, including radiography, ultrasound, eddy currents, magnetic properties and magnetic particle inspection, thermal imaging and dye penetrant, will be considered in addition to more advanced approaches using, for example, lasers, SQUID magnetometers, interferometers, synchrotron and neutron beams and Compton scattering.
Work on the development of conventional and novel transducers is particularly welcomed. In addition, articles are invited on general aspects of nondestructive testing and evaluation in education, training, validation and links with engineering.