A Multi-Scale Self-Attention Network for Diabetic Retinopathy Retrieval
Ming Zeng, Jiansheng Fang, Hanpei Miao, Tianyang Zhang, Jiang Liu
Proceedings of the 4th International Conference on Control and Computer Vision, 2021. DOI: 10.1145/3484274.3484290
Abstract
Diabetic retinopathy (DR), a complication of diabetes, is a common cause of progressive damage to the retina. Mass screening of populations for DR is time-consuming, so computerized diagnosis is of great significance in clinical practice, providing evidence to assist clinicians in decision making. Specifically, hemorrhages, microaneurysms, hard exudates, soft exudates, and other lesions are known to be closely associated with DR. These lesions, however, are scattered across different positions and sizes in fundus images, and their internal relations are hard to preserve in the final features because the many convolution layers progressively discard fine-grained detail. In this paper, we present a deep-learning network with a multi-scale self-attention module that aggregates global context into the learned features for DR image retrieval. The multi-scale fusion strengthens, across scales, the latent relations between different feature positions explored by the self-attention. The proposed network is validated on the Kaggle DR dataset, and the results show that it achieves state-of-the-art performance.
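To make the idea of combining self-attention with multi-scale fusion concrete, the following is a minimal PyTorch sketch. It assumes the module follows the standard non-local self-attention pattern applied to feature maps pooled at several scales and fused back at the original resolution; all class names, the choice of scales, and the fusion-by-averaging step are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a multi-scale self-attention block (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention2d(nn.Module):
    """Non-local style self-attention over a 2-D feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inner = max(channels // reduction, 1)
        self.query = nn.Conv2d(channels, inner, kernel_size=1)
        self.key = nn.Conv2d(channels, inner, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.key(x).flatten(2)                      # (B, C', HW)
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW) position affinities
        v = self.value(x).flatten(2).transpose(1, 2)    # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out


class MultiScaleSelfAttention(nn.Module):
    """Applies self-attention at several pooled scales and fuses the results."""

    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.attn = nn.ModuleList(SelfAttention2d(channels) for _ in scales)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        fused = 0
        for s, attn in zip(self.scales, self.attn):
            xs = F.avg_pool2d(x, kernel_size=s) if s > 1 else x
            ys = attn(xs)
            if s > 1:  # restore the original resolution before fusing
                ys = F.interpolate(ys, size=(h, w), mode="bilinear", align_corners=False)
            fused = fused + ys
        return fused / len(self.scales)


if __name__ == "__main__":
    feats = torch.randn(2, 256, 32, 32)  # e.g. backbone features of a fundus image batch
    module = MultiScaleSelfAttention(256)
    print(module(feats).shape)           # torch.Size([2, 256, 32, 32])
```

In this sketch, attention at the coarser (pooled) scales captures relations between large, distant structures such as exudate clusters, while the full-resolution branch keeps fine detail for small lesions like microaneurysms; the fused map can then feed the retrieval head.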