{"title":"Classification of SD-OCT images using a Deep learning approach","authors":"M. Awais, H. Müller, T. Tang, F. Mériaudeau","doi":"10.1109/ICSIPA.2017.8120661","DOIUrl":null,"url":null,"abstract":"Diabetic Macular Edema (DME) is one of the many eye diseases that is commonly found in diabetic patients. If it is left untreated it may cause vision loss. This paper focuses on classification of abnormal and normal OCT (Optical Coherence Tomography) image volumes using a pre-trained CNN (Convolutional Neural Network). Using VGG16 (Visual Geometry Group), features are extracted at different layers of the network, e.g. before fully connected layer and after each fully connected layer. On the basis of these features classification was performed using different classifiers and results are higher than recently published work on the same dataset with an accuracy of 87.5%, with sensitivity and specificity being 93.5% and 81% respectively.","PeriodicalId":268112,"journal":{"name":"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"59","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICSIPA.2017.8120661","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 59
Abstract
Diabetic Macular Edema (DME) is one of the many eye diseases commonly found in diabetic patients; if left untreated, it may cause vision loss. This paper focuses on the classification of abnormal and normal OCT (Optical Coherence Tomography) image volumes using a pre-trained CNN (Convolutional Neural Network). Using VGG16 (Visual Geometry Group), features are extracted at different layers of the network, e.g., before the first fully connected layer and after each fully connected layer. Classification was then performed on these features using different classifiers, and the results exceed recently published work on the same dataset, with an accuracy of 87.5%, a sensitivity of 93.5%, and a specificity of 81%.
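Below is a minimal sketch of the kind of pipeline the abstract describes: tap features from a fully connected layer of an ImageNet-pre-trained VGG16 and feed them to a conventional classifier. The specific layer name ("fc1"), the linear SVM, the input size, and all preprocessing choices are illustrative assumptions; the abstract does not state the paper's exact settings.

# Sketch (assumed settings): VGG16 feature extraction + conventional classifier.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.models import Model
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# VGG16 pre-trained on ImageNet, keeping the top so the fully connected
# layers ('fc1', 'fc2') are available as feature taps.
base = VGG16(weights="imagenet", include_top=True)
feature_model = Model(inputs=base.input,
                      outputs=base.get_layer("fc1").output)  # 4096-D features

def extract_features(images):
    """images: float array of shape (N, 224, 224, 3); grayscale OCT B-scans
    would be replicated to 3 channels and resized to 224x224 beforehand."""
    x = preprocess_input(images.copy())
    return feature_model.predict(x, verbose=0)

# X_train / X_test and y_train / y_test are hypothetical arrays of
# preprocessed B-scans and their normal (0) / DME (1) labels.
# feats_train = extract_features(X_train)
# feats_test = extract_features(X_test)
# clf = SVC(kernel="linear").fit(feats_train, y_train)
# print("accuracy:", accuracy_score(y_test, clf.predict(feats_test)))

Since the paper classifies whole OCT volumes rather than single B-scans, per-slice predictions would presumably need to be aggregated into a volume-level decision (for example by majority vote); the abstract does not specify how this aggregation is done.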