Wireless capsule endoscopy (WCE) is a noninvasive method for visualizing the inside of the digestive tract. WCE images frequently suffer from low contrast, variable lightness, and poor visibility due to the camera's limited capabilities, and traditional enhancement techniques are difficult to apply in many situations. Most approaches rely on predetermined parameters and ignore the intrinsic information in the image, so they cannot preserve true color without introducing spurious content. The proposed method transfers color using a generative adversarial network (GAN) within an unsupervised image-to-image translation (UNIT) framework: an adaptive four-discriminator UNIT (Ada4D-U) designed to learn the translation between two visual domains. It consists of one generator and four adaptive discriminators, two used for adaptive color adjustment and two used for adaptive feature mapping. Two WCE datasets, Kvasir and Red Lesion (RL), are used to evaluate the enhanced image quality via reference and no-reference metrics; the proposed approach performs better in terms of image quality and the structural similarity index (SSIM). The Fréchet inception distance (FID) is used to quantify the improvement achieved by the proposed UNIT model. The method is also applied as a pre-processing step for WCE tasks, including bleeding lesion detection and lesion segmentation, and its effectiveness is demonstrated on the RL dataset. The performance gains in segmentation and detection are analyzed using metrics such as accuracy, F1 score, Dice coefficient, and Jaccard index.
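The abstract describes a one-generator, four-discriminator layout. Purely as an illustration (this is not the authors' released code), a minimal PyTorch-style sketch of that layout might look as follows; all class names, layer choices, channel sizes, and the PatchGAN-style critics are assumptions made for the sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Encoder-decoder translator between the two WCE image domains (hypothetical layers)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic; in_ch=3 when judging RGB outputs, larger when judging feature maps."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G = Generator()
# Two discriminators for adaptive color adjustment (judge RGB outputs) ...
D_color = [PatchDiscriminator(in_ch=3) for _ in range(2)]
# ... and two for adaptive feature mapping (judge intermediate feature maps).
D_feat = [PatchDiscriminator(in_ch=64) for _ in range(2)]

x = torch.randn(1, 3, 128, 128)          # source-domain WCE frame
y_fake = G(x)                             # translated (color-transferred) frame
color_scores = [d(y_fake) for d in D_color]

feat = torch.randn(1, 64, 64, 64)         # placeholder for an intermediate feature map
feat_scores = [d(feat) for d in D_feat]
```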
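The downstream evaluation cites the Dice coefficient and Jaccard index, which follow directly from their standard set-overlap definitions. The short NumPy illustration below computes both on toy binary masks invented for the example.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index (IoU): |A∩B| / |A∪B| for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)  # predicted lesion mask
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)  # ground-truth mask

print(dice(pred, gt))     # 2*2 / (3+3) ≈ 0.667
print(jaccard(pred, gt))  # 2 / 4 = 0.5
```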
