Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images.

IF 3.2 · CAS Region 4 (Computer Science) · JCR Q2 (Computer Science, Interdisciplinary Applications) · Visual Computing for Industry, Biomedicine and Art · Pub Date: 2025-01-08 · DOI: 10.1186/s42492-024-00181-8
Mouhamed Laid Abimouloud, Khaled Bensid, Mohamed Elleuch, Mohamed Ben Ammar, Monji Kherallah
{"title":"Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images.","authors":"Mouhamed Laid Abimouloud, Khaled Bensid, Mohamed Elleuch, Mohamed Ben Ammar, Monji Kherallah","doi":"10.1186/s42492-024-00181-8","DOIUrl":null,"url":null,"abstract":"<p><p>The vision transformer (ViT) architecture, with its attention mechanism based on multi-head attention layers, has been widely adopted in various computer-aided diagnosis tasks due to its effectiveness in processing medical image information. ViTs are notably recognized for their complex architecture, which requires high-performance GPUs or CPUs for efficient model training and deployment in real-world medical diagnostic devices. This renders them more intricate than convolutional neural networks (CNNs). This difficulty is also challenging in the context of histopathology image analysis, where the images are both limited and complex. In response to these challenges, this study proposes a TokenMixer hybrid-architecture that combines the strengths of CNNs and ViTs. This hybrid architecture aims to enhance feature extraction and classification accuracy with shorter training time and fewer parameters by minimizing the number of input patches employed during training, while incorporating tokenization of input patches using convolutional layers and encoder transformer layers to process patches across all network layers for fast and accurate breast cancer tumor subtype classification. The TokenMixer mechanism is inspired by the ConvMixer and TokenLearner models. First, the ConvMixer model dynamically generates spatial attention maps using convolutional layers, enabling the extraction of patches from input images to minimize the number of input patches used in training. Second, the TokenLearner model extracts relevant regions from the selected input patches, tokenizes them to improve feature extraction, and trains all tokenized patches in an encoder transformer network. We evaluated the TokenMixer model on the BreakHis public dataset, comparing it with ViT-based and other state-of-the-art methods. Our approach achieved impressive results for both binary and multi-classification of breast cancer subtypes across various magnification levels (40×, 100×, 200×, 400×). The model demonstrated accuracies of 97.02% for binary classification and 93.29% for multi-classification, with decision times of 391.71 and 1173.56 s, respectively. These results highlight the potential of our hybrid deep ViT-CNN architecture for advancing tumor classification in histopathological images. The source code is accessible: https://github.com/abimouloud/TokenMixer .</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"1"},"PeriodicalIF":3.2000,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11711433/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Computing for Industry Biomedicine and Art","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1186/s42492-024-00181-8","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

The vision transformer (ViT) architecture, built on multi-head attention layers, has been widely adopted in computer-aided diagnosis tasks because of its effectiveness in processing medical image information. However, ViTs are complex models that require high-performance GPUs or CPUs for efficient training and for deployment in real-world medical diagnostic devices, which makes them more demanding than convolutional neural networks (CNNs). This difficulty is compounded in histopathology image analysis, where datasets are limited and the images themselves are complex. In response to these challenges, this study proposes TokenMixer, a hybrid architecture that combines the strengths of CNNs and ViTs. The architecture aims to improve feature extraction and classification accuracy with shorter training time and fewer parameters by minimizing the number of input patches used during training, tokenizing the input patches with convolutional layers, and processing the resulting tokens with transformer encoder layers across all network layers for fast and accurate classification of breast cancer tumor subtypes. The TokenMixer mechanism is inspired by the ConvMixer and TokenLearner models. First, the ConvMixer component dynamically generates spatial attention maps using convolutional layers, enabling the extraction of patches from input images and reducing the number of patches used in training. Second, the TokenLearner component extracts relevant regions from the selected patches, tokenizes them to improve feature extraction, and feeds all tokenized patches to a transformer encoder network. We evaluated the TokenMixer model on the public BreakHis dataset, comparing it with ViT-based and other state-of-the-art methods. Our approach achieved strong results for both binary and multi-class classification of breast cancer subtypes across magnification levels (40×, 100×, 200×, 400×): accuracies of 97.02% for binary classification and 93.29% for multi-class classification, with decision times of 391.71 s and 1173.56 s, respectively. These results highlight the potential of our hybrid deep ViT-CNN architecture for advancing tumor classification in histopathological images. The source code is available at https://github.com/abimouloud/TokenMixer.
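The abstract describes the pipeline only at a high level. The sketch below is a minimal, illustrative reconstruction of that pipeline, not the authors' released implementation (see their repository for that): a ConvMixer-style convolutional tokenizer turns the image into patch tokens, a TokenLearner-style module pools them into a small set of learned summary tokens, and a transformer encoder with a linear head produces the class prediction. All module names, layer sizes, and hyperparameters (dim=128, num_tokens=8, depth=4, patch_size=16) are assumptions chosen for illustration.

```python
# Illustrative sketch of a TokenMixer-style hybrid (assumed layer sizes,
# not the authors' code): conv patch tokenization -> token reduction ->
# transformer encoder -> classification head.
import torch
import torch.nn as nn


class ConvTokenizer(nn.Module):
    """ConvMixer-style patch embedding: a strided convolution turns the
    image into a grid of patch tokens."""
    def __init__(self, dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x):                        # x: (B, 3, H, W)
        x = self.proj(x)                         # (B, dim, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)      # (B, N_patches, dim)


class TokenLearnerPool(nn.Module):
    """TokenLearner-style selection: learn S attention maps over the patch
    tokens and pool them into S summary tokens, shrinking the sequence."""
    def __init__(self, dim=128, num_tokens=8):
        super().__init__()
        self.attn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, num_tokens))

    def forward(self, x):                        # x: (B, N, dim)
        a = self.attn(x).softmax(dim=1)          # (B, N, S): weights over patches
        return torch.einsum("bns,bnd->bsd", a, x)  # (B, S, dim)


class TokenMixerSketch(nn.Module):
    def __init__(self, dim=128, num_tokens=8, depth=4, num_classes=2):
        super().__init__()
        self.tokenizer = ConvTokenizer(dim)
        self.token_pool = TokenLearnerPool(dim, num_tokens)
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=4, dim_feedforward=4 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        tokens = self.token_pool(self.tokenizer(x))   # few tokens per image
        return self.head(self.encoder(tokens).mean(dim=1))


if __name__ == "__main__":
    model = TokenMixerSketch(num_classes=2)       # binary benign/malignant head
    logits = model(torch.randn(2, 3, 224, 224))   # histopathology-sized crops (assumed)
    print(logits.shape)                           # torch.Size([2, 2])
```

The design point the abstract emphasizes is the token-reduction step: pooling hundreds of patch tokens into a handful of learned tokens before the encoder cuts both the quadratic self-attention cost and the parameter count, which is what makes the hybrid faster to train than a plain ViT.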
