Efficient Brain Tumor Segmentation with Lightweight Separable Spatial Convolutional Network

IF 5.2 | CAS Tier 3 (Computer Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | ACM Transactions on Multimedia Computing Communications and Applications | Pub Date: 2024-03-23 | DOI: 10.1145/3653715
Hao Zhang, Meng Liu, Yuan Qi, Yang Ning, Shunbo Hu, Liqiang Nie, Wenyin Zhang
{"title":"Efficient Brain Tumor Segmentation with Lightweight Separable Spatial Convolutional Network","authors":"Hao Zhang, Meng Liu, Yuan Qi, Yang Ning, Shunbo Hu, Liqiang Nie, Wenyin Zhang","doi":"10.1145/3653715","DOIUrl":null,"url":null,"abstract":"<p>Accurate and automated segmentation of lesions in brain MRI scans is crucial in diagnostics and treatment planning. Despite the significant achievements of existing approaches, they often require substantial computational resources and fail to fully exploit the synergy between low-level and high-level features. To address these challenges, we introduce the Separable Spatial Convolutional Network (SSCN), an innovative model that refines the U-Net architecture to achieve efficient brain tumor segmentation with minimal computational cost. SSCN integrates the PocketNet paradigm and replaces standard convolutions with depthwise separable convolutions, resulting in a significant reduction in parameters and computational load. Additionally, our feature complementary module enhances the interaction between features across the encoder-decoder structure, facilitating the integration of multi-scale features while maintaining low computational demands. The model also incorporates a separable spatial attention mechanism, enhancing its capability to discern spatial details. Empirical validations on standard datasets demonstrate the effectiveness of our proposed model, especially in segmenting small and medium-sized tumors, with only 0.27M parameters and 3.68GFlops. Our code is available at https://github.com/zzpr/SSCN.</p>","PeriodicalId":50937,"journal":{"name":"ACM Transactions on Multimedia Computing Communications and Applications","volume":"103 1","pages":""},"PeriodicalIF":5.2000,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Multimedia Computing Communications and Applications","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3653715","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

Accurate and automated segmentation of lesions in brain MRI scans is crucial in diagnostics and treatment planning. Despite the significant achievements of existing approaches, they often require substantial computational resources and fail to fully exploit the synergy between low-level and high-level features. To address these challenges, we introduce the Separable Spatial Convolutional Network (SSCN), an innovative model that refines the U-Net architecture to achieve efficient brain tumor segmentation with minimal computational cost. SSCN integrates the PocketNet paradigm and replaces standard convolutions with depthwise separable convolutions, resulting in a significant reduction in parameters and computational load. Additionally, our feature complementary module enhances the interaction between features across the encoder-decoder structure, facilitating the integration of multi-scale features while maintaining low computational demands. The model also incorporates a separable spatial attention mechanism, enhancing its capability to discern spatial details. Empirical validations on standard datasets demonstrate the effectiveness of our proposed model, especially in segmenting small and medium-sized tumors, with only 0.27M parameters and 3.68GFlops. Our code is available at https://github.com/zzpr/SSCN.
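
The key efficiency idea named in the abstract is replacing standard convolutions with depthwise separable convolutions, which factor one k x k convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution. The PyTorch sketch below illustrates that substitution and the resulting parameter reduction; it is a generic illustration under assumed settings (2D convolutions, a BatchNorm/ReLU block named DepthwiseSeparableConv), not the authors' implementation, which is available at the linked repository.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Illustrative depthwise separable convolution block (hypothetical; not the SSCN code).

    A depthwise convolution (groups == in_channels) filters each channel
    independently; a 1x1 pointwise convolution then mixes channels.
    """
    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=padding, groups=in_channels, bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

# Parameter comparison for a 64 -> 128 channel, 3x3 layer:
standard = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
separable = DepthwiseSeparableConv(64, 128)
print(sum(p.numel() for p in standard.parameters()))   # 73,728
print(sum(p.numel() for p in separable.parameters()))  # 9,024 (incl. BatchNorm)

Applied throughout a U-Net-style encoder-decoder, this roughly order-of-magnitude saving per layer is what makes a parameter budget on the order of 0.27M plausible. Note that volumetric brain MRI pipelines often use 3D convolutions (nn.Conv3d), where the same factorization applies.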

Source Journal
CiteScore: 8.50
Self-citation rate: 5.90%
Publications: 285
Review time: 7.5 months
Journal Introduction: The ACM Transactions on Multimedia Computing, Communications, and Applications is the flagship publication of the ACM Special Interest Group in Multimedia (SIGMM). It solicits paper submissions on all aspects of multimedia; papers on single media (for instance, audio, video, animation) and their processing are also welcome. TOMM is a peer-reviewed, archival journal, available in both print and digital form. The journal is published quarterly, with roughly seven 23-page articles in each issue; in addition, all special issues are published online-only to ensure timely publication. The transactions consist primarily of research papers. This is an archival journal, and it is intended that the papers will have lasting importance and value over time. In general, papers whose primary focus is on particular multimedia products or the current state of the industry will not be included.
Latest Articles from This Journal
TA-Detector: A GNN-based Anomaly Detector via Trust Relationship
KF-VTON: Keypoints-Driven Flow Based Virtual Try-On Network
Unified View Empirical Study for Large Pretrained Model on Cross-Domain Few-Shot Learning
Multimodal Fusion for Talking Face Generation Utilizing Speech-related Facial Action Units
Compressed Point Cloud Quality Index by Combining Global Appearance and Local Details