Fast Lung Image Segmentation Using Lightweight VAEL-Unet

Xiulan Hao, Chuanjin Zhang, Shiluo Xu
{"title":"Fast Lung Image Segmentation Using Lightweight VAEL-Unet","authors":"Xiulan Hao, Chuanjin Zhang, Shiluo Xu","doi":"10.4108/eetsis.4788","DOIUrl":null,"url":null,"abstract":"INTRODUCTION: A lightweght lung image segmentation model was explored. It was with fast speed and low resouces consumed while the accuracy was comparable to those SOAT models.\nOBJECTIVES: To improve the segmentation accuracy and computational efficiency of the model in extracting lung regions from chest X-ray images, a lightweight segmentation model enhanced with a visual attention mechanism called VAEL-Unet, was proposed.\nMETHODS: Firstly, the bneck module from the MobileNetV3 network was employed to replace the convolutional and pooling operations at different positions in the U-Net encoder, enabling the model to extract deeper-level features while reducing complexity and parameters. Secondly, an attention module was introduced during feature fusion, where the processed feature maps were sequentially fused with the corresponding positions in the decoder to obtain the segmented image.\nRESULTS: On ChestXray, the accuracy of VAEL-Unet improves from 97.37% in the traditional U-Net network to 97.69%, while the F1-score increases by 0.67%, 0.77%, 0.61%, and 1.03% compared to U-Net, SegNet, ResUnet and DeepLabV3+ networks. respectively. On LUNA dataset. the F1-score demonstrates improvements of 0.51%, 0.48%, 0.22% and 0.46%, respectively, while the accuracy has increased from 97.78% in the traditional U-Net model to 98.08% in the VAEL-Unet model. The training time of the VAEL-Unet is much less compared to other models. The number of parameters of VAEL-Unet is only 1.1M, significantly less than 32M of U-Net, 29M of SegNet, 48M of Res-Unet, 5.8M of DeeplabV3+ and 41M of DeepLabV3Plus_ResNet50. \nCONCLUSION: These results indicate that VAEL-Unet’s segmentation performance is slightly better than other referenced models while its training time and parameters are much less.","PeriodicalId":155438,"journal":{"name":"ICST Transactions on Scalable Information Systems","volume":"14 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ICST Transactions on Scalable Information Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4108/eetsis.4788","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

INTRODUCTION: A lightweight lung image segmentation model was explored. It offers fast speed and low resource consumption while achieving accuracy comparable to state-of-the-art (SOTA) models.

OBJECTIVES: To improve the segmentation accuracy and computational efficiency of the model in extracting lung regions from chest X-ray images, a lightweight segmentation model enhanced with a visual attention mechanism, called VAEL-Unet, was proposed.

METHODS: Firstly, the bneck module from the MobileNetV3 network was employed to replace the convolutional and pooling operations at different positions in the U-Net encoder, enabling the model to extract deeper-level features while reducing complexity and the number of parameters. Secondly, an attention module was introduced during feature fusion, where the processed feature maps were sequentially fused with the corresponding positions in the decoder to obtain the segmented image.

RESULTS: On the ChestXray dataset, the accuracy of VAEL-Unet improves from 97.37% with the traditional U-Net network to 97.69%, while the F1-score increases by 0.67%, 0.77%, 0.61%, and 1.03% compared to the U-Net, SegNet, ResUnet, and DeepLabV3+ networks, respectively. On the LUNA dataset, the F1-score shows improvements of 0.51%, 0.48%, 0.22%, and 0.46%, respectively, while the accuracy increases from 97.78% with the traditional U-Net model to 98.08% with the VAEL-Unet model. The training time of VAEL-Unet is much shorter than that of the other models. VAEL-Unet has only 1.1M parameters, significantly fewer than the 32M of U-Net, 29M of SegNet, 48M of Res-Unet, 5.8M of DeepLabV3+, and 41M of DeepLabV3Plus_ResNet50.

CONCLUSION: These results indicate that VAEL-Unet's segmentation performance is slightly better than that of the other referenced models, while its training time and parameter count are much lower.
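
The METHODS section describes two architectural changes: MobileNetV3 bneck blocks replacing the convolution and pooling stages of the U-Net encoder, and an attention module applied to skip features before they are fused in the decoder. The PyTorch sketch below is a minimal, hypothetical illustration of both ideas; the channel sizes, the Hardswish activations, and the specific attention-gate formulation are assumptions for illustration, not the authors' published configuration.

```python
# Hypothetical sketch of the two components named in the abstract: a MobileNetV3-style
# "bneck" block standing in for U-Net encoder convolutions, and an attention gate that
# re-weights skip features before decoder fusion. Sizes and details are assumed.
import torch
import torch.nn as nn

class Bneck(nn.Module):
    """MobileNetV3-style inverted residual: 1x1 expand -> depthwise 3x3 -> 1x1 project."""
    def __init__(self, in_ch, out_ch, expand_ch, stride=1):
        super().__init__()
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, expand_ch, 1, bias=False),
            nn.BatchNorm2d(expand_ch), nn.Hardswish(),
            nn.Conv2d(expand_ch, expand_ch, 3, stride, 1, groups=expand_ch, bias=False),
            nn.BatchNorm2d(expand_ch), nn.Hardswish(),
            nn.Conv2d(expand_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

class AttentionGate(nn.Module):
    """Attention-U-Net-style gate: uses the decoder signal to weight encoder skip features."""
    def __init__(self, enc_ch, dec_ch, mid_ch):
        super().__init__()
        self.w_enc = nn.Conv2d(enc_ch, mid_ch, 1, bias=False)
        self.w_dec = nn.Conv2d(dec_ch, mid_ch, 1, bias=False)
        self.psi = nn.Sequential(nn.Conv2d(mid_ch, 1, 1), nn.Sigmoid())

    def forward(self, enc_feat, dec_feat):
        attn = self.psi(torch.relu(self.w_enc(enc_feat) + self.w_dec(dec_feat)))
        return enc_feat * attn  # gated skip features

# Usage: gate a skip connection, then fuse it with the upsampled decoder features.
enc_feat = torch.randn(1, 64, 128, 128)      # encoder feature map (e.g. from a Bneck stage)
dec_feat = torch.randn(1, 64, 128, 128)      # upsampled decoder feature map
gated = AttentionGate(64, 64, 32)(enc_feat, dec_feat)
fused = torch.cat([gated, dec_feat], dim=1)  # input to the next decoder convolution
```
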