Medical Image Classification: A Comparison of Deep Pre-trained Neural Networks

D. O. Alebiosu, Fermi Pasha Muhammad
{"title":"医学图像分类:深度预训练神经网络的比较","authors":"D. O. Alebiosu, Fermi Pasha Muhammad","doi":"10.1109/SCORED.2019.8896277","DOIUrl":null,"url":null,"abstract":"Medical image classification is an important step in the effective and accurate retrieval of medical images from large digital database where they are stored. This paper examines the effectiveness of using domain transferred neural networks (DCNNs) for classification of medical X-ray images. We employed two different convolutional neural network (CNN) architectures. VGGNet-16 and AlexNet pre-trained on ImageNet, a non- medical image database consisting of over 1.2 million scenery images were used for the classification task. The pre-trained networks served both as feature extractors and as fine-tuned networks. The extracted feature vector was used to train a linear support vector machine (SVM) to generate a model for the classification task. The fine-tuning process was done by replacing and retraining the last fully connected layers through backward propagation. Our method was evaluated on ImageCLEF2007 medical database. The database consist of 11,000 medical X-ray images (training dataset) and 1,000 images (testing dataset) classified into 116 categories. We compared the performance of the two networks both as feature generators and as fine-tuned networks on our dataset. The overall classification accuracy across all the 116 image classes shows that VGGNet-16 + SVM produced 79.6% and 85.77% as fine-tuned network. AlexNet + SVM produced a total classification accuracy of 84.27% and as a fine-tuned network produced a total of 86.47% which is the highest among the four techniques across all the 116 image classes. This study shows that the employment of a shallower pre-trained neural network such as AlexNet learn features that are more generalizable compared to deeper networkers such as VGGNet-16 and has a greater capability of increasing classification accuracy of medical image database. Though the pre-trained AlexNet outperformed VGGNet-16 in both ways, it can be noted that some image classes from the same sub-body region are difficult to classify accurately. This is as a result of inter-class similarity that exists among the images.","PeriodicalId":231004,"journal":{"name":"2019 IEEE Student Conference on Research and Development (SCOReD)","volume":"43 7","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"Medical Image Classification: A Comparison of Deep Pre-trained Neural Networks\",\"authors\":\"D. O. Alebiosu, Fermi Pasha Muhammad\",\"doi\":\"10.1109/SCORED.2019.8896277\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Medical image classification is an important step in the effective and accurate retrieval of medical images from large digital database where they are stored. This paper examines the effectiveness of using domain transferred neural networks (DCNNs) for classification of medical X-ray images. We employed two different convolutional neural network (CNN) architectures. VGGNet-16 and AlexNet pre-trained on ImageNet, a non- medical image database consisting of over 1.2 million scenery images were used for the classification task. The pre-trained networks served both as feature extractors and as fine-tuned networks. The extracted feature vector was used to train a linear support vector machine (SVM) to generate a model for the classification task. 
The fine-tuning process was done by replacing and retraining the last fully connected layers through backward propagation. Our method was evaluated on ImageCLEF2007 medical database. The database consist of 11,000 medical X-ray images (training dataset) and 1,000 images (testing dataset) classified into 116 categories. We compared the performance of the two networks both as feature generators and as fine-tuned networks on our dataset. The overall classification accuracy across all the 116 image classes shows that VGGNet-16 + SVM produced 79.6% and 85.77% as fine-tuned network. AlexNet + SVM produced a total classification accuracy of 84.27% and as a fine-tuned network produced a total of 86.47% which is the highest among the four techniques across all the 116 image classes. This study shows that the employment of a shallower pre-trained neural network such as AlexNet learn features that are more generalizable compared to deeper networkers such as VGGNet-16 and has a greater capability of increasing classification accuracy of medical image database. Though the pre-trained AlexNet outperformed VGGNet-16 in both ways, it can be noted that some image classes from the same sub-body region are difficult to classify accurately. This is as a result of inter-class similarity that exists among the images.\",\"PeriodicalId\":231004,\"journal\":{\"name\":\"2019 IEEE Student Conference on Research and Development (SCOReD)\",\"volume\":\"43 7\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE Student Conference on Research and Development (SCOReD)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SCORED.2019.8896277\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE Student Conference on Research and Development (SCOReD)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SCORED.2019.8896277","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 3

Abstract

Medical image classification is an important step in the effective and accurate retrieval of medical images from the large digital databases in which they are stored. This paper examines the effectiveness of domain-transferred deep convolutional neural networks (DCNNs) for the classification of medical X-ray images. We employed two convolutional neural network (CNN) architectures, VGGNet-16 and AlexNet, both pre-trained on ImageNet, a non-medical image database of over 1.2 million natural images. The pre-trained networks served both as feature extractors and as fine-tuned networks. The extracted feature vectors were used to train a linear support vector machine (SVM) that produced the classification model. Fine-tuning was carried out by replacing the last fully connected layers and retraining them through backpropagation. Our method was evaluated on the ImageCLEF2007 medical database, which consists of 11,000 medical X-ray training images and 1,000 testing images divided into 116 categories. We compared the performance of the two networks, both as feature generators and as fine-tuned networks, on this dataset. Across all 116 image classes, VGGNet-16 + SVM achieved an overall classification accuracy of 79.6%, and fine-tuned VGGNet-16 achieved 85.77%. AlexNet + SVM achieved 84.27%, and fine-tuned AlexNet achieved 86.47%, the highest of the four techniques. This study shows that a shallower pre-trained network such as AlexNet learns features that are more generalizable than those of a deeper network such as VGGNet-16, and is better able to improve the classification accuracy of a medical image database. Although the pre-trained AlexNet outperformed VGGNet-16 in both settings, some image classes drawn from the same sub-body region remain difficult to classify accurately because of the inter-class similarity among those images.
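To make the two transfer-learning settings concrete, the following minimal PyTorch/scikit-learn sketch shows an ImageNet pre-trained AlexNet used (a) as a fixed feature extractor feeding a linear SVM and (b) fine-tuned by replacing and retraining its final fully connected layer. This is not the authors' code: the model loading API, hyperparameters, and the stand-in training tensors are assumptions for illustration only.

```python
# Illustrative sketch of the two transfer-learning settings described in the abstract.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

NUM_CLASSES = 116  # ImageCLEF2007 X-ray categories

# --- (a) Pre-trained network as feature extractor + linear SVM ----------
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

# Keep everything up to (but not including) the final fully connected layer;
# its 4096-dimensional activation serves as the image descriptor.
feature_extractor = nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    nn.Flatten(),
    *list(alexnet.classifier.children())[:-1],  # drop the 1000-way ImageNet head
)

def extract_features(images: torch.Tensor) -> torch.Tensor:
    """images: a batch of preprocessed 3x224x224 tensors."""
    with torch.no_grad():
        return feature_extractor(images)

# Hypothetical stand-in batch; in practice these come from a DataLoader
# over the 11,000-image ImageCLEF2007 training set.
train_images = torch.randn(32, 3, 224, 224)
train_labels = torch.randint(0, NUM_CLASSES, (32,))

svm = LinearSVC()
svm.fit(extract_features(train_images).numpy(), train_labels.numpy())

# --- (b) Fine-tuning: replace and retrain the last fully connected layer ---
finetune_net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
finetune_net.classifier[6] = nn.Linear(4096, NUM_CLASSES)  # 116-way head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(finetune_net.parameters(), lr=1e-3, momentum=0.9)

finetune_net.train()
for images, labels in [(train_images, train_labels)]:  # stand-in for a DataLoader
    optimizer.zero_grad()
    loss = criterion(finetune_net(images), labels)
    loss.backward()   # backpropagation through the replaced head (and backbone)
    optimizer.step()
```

The same pattern applies to VGGNet-16 by swapping in `models.vgg16` and replacing its final classifier layer; only the layer indices differ.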