Ayoub Skouta, Abdelali Elmoufidi, Said Jai-Andaloussi, O. Ouchetto
{"title":"基于CNN和随机森林算法的眼底图像视网膜血管语义分割","authors":"Ayoub Skouta, Abdelali Elmoufidi, Said Jai-Andaloussi, O. Ouchetto","doi":"10.5220/0010911800003118","DOIUrl":null,"url":null,"abstract":"Abstract: In this paper, we present a new study to improve the automated segmentation of blood vessels in diabetic retinopathy images. Pre-processing is necessary due to the contrast between the blood vessels and the background, as well as the uneven illumination of the retinal images, in order to produce better quality data to be used in further processing. We use data augmentation techniques to increase the amount of accessible data in the dataset to overcome the data sparsity problem that deep learning requires. We then use the CNN VGG16 architecture to extract the feature from the preprocessed background images. The Random Forest method will then use the extracted attributes as input parameters. We used part of the augmented dataset to train the model (1764 images, representing the training set); the rest of the dataset will be used to test the model (196 images, representing the test set). Regarding the model validation phase, we used the dedicated part for testing the DRIVE dataset. Promising results compared to the state of the art were obtained. The method achieved an accuracy of 98.7%, a sensitivity of 97.4% and specificity of 99.5%. A comparison with some recent previous work in the literature has shown a significant advancement in our proposal.","PeriodicalId":72028,"journal":{"name":"... International Conference on Wearable and Implantable Body Sensor Networks. 
International Conference on Wearable and Implantable Body Sensor Networks","volume":"5 1","pages":"163-170"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Semantic Segmentation of Retinal Blood Vessels from Fundus Images by using CNN and the Random Forest Algorithm\",\"authors\":\"Ayoub Skouta, Abdelali Elmoufidi, Said Jai-Andaloussi, O. Ouchetto\",\"doi\":\"10.5220/0010911800003118\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract: In this paper, we present a new study to improve the automated segmentation of blood vessels in diabetic retinopathy images. Pre-processing is necessary due to the contrast between the blood vessels and the background, as well as the uneven illumination of the retinal images, in order to produce better quality data to be used in further processing. We use data augmentation techniques to increase the amount of accessible data in the dataset to overcome the data sparsity problem that deep learning requires. We then use the CNN VGG16 architecture to extract the feature from the preprocessed background images. The Random Forest method will then use the extracted attributes as input parameters. We used part of the augmented dataset to train the model (1764 images, representing the training set); the rest of the dataset will be used to test the model (196 images, representing the test set). Regarding the model validation phase, we used the dedicated part for testing the DRIVE dataset. Promising results compared to the state of the art were obtained. The method achieved an accuracy of 98.7%, a sensitivity of 97.4% and specificity of 99.5%. A comparison with some recent previous work in the literature has shown a significant advancement in our proposal.\",\"PeriodicalId\":72028,\"journal\":{\"name\":\"... International Conference on Wearable and Implantable Body Sensor Networks. 
International Conference on Wearable and Implantable Body Sensor Networks\",\"volume\":\"5 1\",\"pages\":\"163-170\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"... International Conference on Wearable and Implantable Body Sensor Networks. International Conference on Wearable and Implantable Body Sensor Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0010911800003118\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"... International Conference on Wearable and Implantable Body Sensor Networks. International Conference on Wearable and Implantable Body Sensor Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0010911800003118","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Semantic Segmentation of Retinal Blood Vessels from Fundus Images by using CNN and the Random Forest Algorithm
Abstract: In this paper, we present a new study to improve the automated segmentation of blood vessels in diabetic retinopathy images. Because of the low contrast between the blood vessels and the background, as well as the uneven illumination of retinal images, pre-processing is necessary to produce better-quality data for the subsequent stages. We use data augmentation techniques to increase the amount of available data and overcome the data sparsity that deep learning is sensitive to. We then use the VGG16 CNN architecture to extract features from the preprocessed fundus images, and the Random Forest classifier takes the extracted features as input. Part of the augmented dataset (1764 images) forms the training set; the remainder (196 images) forms the test set. For model validation, we used the dedicated test partition of the DRIVE dataset. Promising results compared to the state of the art were obtained: the method achieved an accuracy of 98.7%, a sensitivity of 97.4%, and a specificity of 99.5%. A comparison with recent work in the literature shows a significant advance in our proposal.
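The pipeline the abstract describes (deep features fed to a Random Forest, then evaluated with accuracy, sensitivity, and specificity) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the VGG16 activations are replaced with random arrays so the sketch is self-contained, the feature dimension is an assumption, and only the train/test sizes (1764 and 196) are taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in for VGG16 deep features: in the paper's pipeline these would be
# activations from a pretrained VGG16 applied to the preprocessed fundus
# images; random arrays keep the sketch self-contained. The 64-dimensional
# feature size is illustrative, not taken from the paper.
rng = np.random.default_rng(42)
n_train, n_test, n_features = 1764, 196, 64
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.integers(0, 2, size=n_train)   # 1 = vessel, 0 = background
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.integers(0, 2, size=n_test)

# The Random Forest consumes the extracted attributes as input.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)

# The reported metrics: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP).
tp = np.sum((y_pred == 1) & (y_test == 1))
tn = np.sum((y_pred == 0) & (y_test == 0))
fp = np.sum((y_pred == 1) & (y_test == 0))
fn = np.sum((y_pred == 0) & (y_test == 1))
accuracy = (tp + tn) / len(y_test)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
```

With random features the metrics hover near chance; the point is only the shape of the pipeline, in which the CNN serves purely as a feature extractor and the forest does the per-pixel classification.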