A Coarse-to-fine Unsupervised Domain Adaptation Method for Cross-Mode Polyp Segmentation

Kieu Dang Nam, Thi-Oanh Nguyen, N. T. Thuy, D. V. Hang, D. Long, Tran Quang Trung, D. V. Sang

2022 14th International Conference on Knowledge and Systems Engineering (KSE), published 2022-10-19
DOI: 10.1109/KSE56063.2022.9953621
Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a model learned on a labeled source domain to a target domain for which no labels are available. However, UDA performance can suffer greatly from domain shift, i.e., the misalignment between the data distributions of the two domains. Endoscopy can be performed under different light modes, including white-light imaging (WLI) and image-enhanced endoscopy (IEE). However, most current polyp datasets are collected in WLI mode, since it is the standard and most widely used mode across endoscopy systems. Consequently, AI models trained on such WLI datasets can degrade strongly when applied to other light modes. To address this issue, this paper proposes a coarse-to-fine UDA method that first coarsely aligns the two data distributions at the input level using the Fourier transform in a chromatic color space, and then finely aligns them at the feature level using fine-grained adversarial training. The backbone of our model is based on a powerful transformer architecture. Experimental results show that the proposed method effectively mitigates domain shift and achieves a substantial performance improvement on cross-mode polyp segmentation for endoscopy.
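The coarse, input-level alignment described above is in the spirit of Fourier-based style transfer between domains: swap the low-frequency amplitude spectrum of a source image with that of a target image while keeping the source phase, so source content acquires the target domain's global appearance. The sketch below illustrates that general idea only; the function name `fourier_align` and the band-width parameter `beta` are illustrative, and the paper's chromatic-space step and the fine-grained adversarial stage are not reproduced here.

```python
import numpy as np

def fourier_align(source_img, target_img, beta=0.05):
    """Coarsely align a source image to a target image's appearance by
    swapping the low-frequency amplitude of their 2D Fourier spectra.

    source_img, target_img: float arrays of shape (H, W, C), values in [0, 1].
    beta: fraction of the spectrum (per side) treated as low-frequency.
    """
    # Per-channel 2D FFT of both images.
    fft_src = np.fft.fft2(source_img, axes=(0, 1))
    fft_trg = np.fft.fft2(target_img, axes=(0, 1))

    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # Shift zero frequency to the center so the low-frequency band
    # becomes a central square patch.
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))

    h, w = source_img.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Replace the source's low-frequency amplitude with the target's.
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_trg[ch - b:ch + b + 1, cw - b:cw + b + 1]

    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))
    # Recombine the swapped amplitude with the source phase: phase
    # carries the source content, amplitude carries target appearance.
    aligned = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.clip(np.real(aligned), 0.0, 1.0)
```

In a UDA pipeline of this kind, the segmentation model would then be trained on source images transformed this way (with their original labels), which narrows the input-level gap before any feature-level alignment is applied.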