Outdoor scenes often suffer from insufficient and non-uniform illumination, leading to object detection (OD) failures. This issue has attracted considerable research attention, and the mainstream solution is to improve the model's feature extraction capability through cascaded feature enhancement modules. However, such approaches increase model complexity, and the enhancement effect depends heavily on the similarity between the training and testing data. Alternatively, some methods incorporate parallel low-light image enhancement (LLE) modules to guide the training of object detection models. Nevertheless, because object detection datasets with paired bright and low-light images are lacking, these methods often require manually selecting an appropriate pre-trained LLE model for each scene, making end-to-end training challenging. In this paper, we aim to build an end-to-end LLE&OD cascade multitask model that leverages the strengths of both approaches. We use a new data augmentation technique to synthesize low-light images from normal-light object detection datasets. To jointly train the cascade model, we design a new self-guided loss. By deconstructing and reorganizing the multitask model, the self-guided loss effectively steers the model away from local optima of the individual tasks, enabling it to outperform many state-of-the-art methods on several publicly available night-scene datasets, as well as on a daytime-scene dataset. The source code of the proposed method will be available at https://github.com/225ceV/SGC.
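The abstract mentions synthesizing low-light images from normal-light datasets but does not describe the augmentation itself. As a minimal sketch only, assuming a common darkening recipe (gamma compression plus additive sensor noise, with the hypothetical function name `synthesize_low_light` introduced here for illustration), such an augmentation could look like:

```python
import numpy as np

def synthesize_low_light(image, gamma=3.0, noise_sigma=0.02, rng=None):
    """Darken a normal-light image (float array in [0, 1], shape (H, W, C)).

    Hypothetical augmentation: gamma > 1 compresses bright values toward 0,
    and Gaussian noise mimics low-light sensor read noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    dark = np.power(image, gamma)                                 # darken
    dark = dark + rng.normal(0.0, noise_sigma, size=dark.shape)   # add noise
    return np.clip(dark, 0.0, 1.0)                                # keep in range

# A mid-gray image becomes much darker: 0.5 ** 3 = 0.125.
img = np.full((4, 4, 3), 0.5)
dark = synthesize_low_light(img, gamma=3.0, noise_sigma=0.0)  # → all values 0.125
```

Because the synthesized dark image is paired with its bright original and the original bounding-box labels, the cascade model can be trained end to end without hand-picking a pre-trained LLE model per scene.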
