Shallow and deep learning classifiers in medical image analysis.

European Radiology Experimental · IF 3.7 · Q1 (Radiology, Nuclear Medicine & Medical Imaging) · Pub Date: 2024-03-05 · DOI: 10.1186/s41747-024-00428-2 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912073/pdf/
Francesco Prinzi, Tiziana Currieri, Salvatore Gaglio, Salvatore Vitabile
{"title":"医学图像分析中的浅层和深度学习分类器。","authors":"Francesco Prinzi, Tiziana Currieri, Salvatore Gaglio, Salvatore Vitabile","doi":"10.1186/s41747-024-00428-2","DOIUrl":null,"url":null,"abstract":"<p><p>An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which nevertheless is its most cited and used sub-branch in the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights on the most accessible and widely employed classifiers in radiology field, distinguishing between \"shallow\" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest and XGBoost, and \"deep\" learning architectures including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps for classifiers training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset dealing with, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithms interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.Relevance statement The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow learning to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice. Key points • Training a shallow classifier requires extracting disease-related features from region of interests (e.g., radiomics).• Deep classifiers implement automatic feature extraction and classification.• The classifier selection is based on data and computational resources availability, task, and explanation needs.</p>","PeriodicalId":36926,"journal":{"name":"European Radiology Experimental","volume":null,"pages":null},"PeriodicalIF":3.7000,"publicationDate":"2024-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912073/pdf/","citationCount":"0","resultStr":"{\"title\":\"Shallow and deep learning classifiers in medical image analysis.\",\"authors\":\"Francesco Prinzi, Tiziana Currieri, Salvatore Gaglio, Salvatore Vitabile\",\"doi\":\"10.1186/s41747-024-00428-2\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which nevertheless is its most cited and used sub-branch in the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. 
This review aims to give primary educational insights on the most accessible and widely employed classifiers in radiology field, distinguishing between \\\"shallow\\\" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forest and XGBoost, and \\\"deep\\\" learning architectures including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps for classifiers training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and dataset dealing with, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithms interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.Relevance statement The growing synergy between artificial intelligence and medicine fosters predictive models aiding physicians. Machine learning classifiers, from shallow learning to deep learning, are offering crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice. Key points • Training a shallow classifier requires extracting disease-related features from region of interests (e.g., radiomics).• Deep classifiers implement automatic feature extraction and classification.• The classifier selection is based on data and computational resources availability, task, and explanation needs.</p>\",\"PeriodicalId\":36926,\"journal\":{\"name\":\"European Radiology Experimental\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.7000,\"publicationDate\":\"2024-03-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10912073/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Radiology Experimental\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1186/s41747-024-00428-2\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Radiology Experimental","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1186/s41747-024-00428-2","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


An increasingly strong connection between artificial intelligence and medicine has enabled the development of predictive models capable of supporting physicians' decision-making. Artificial intelligence encompasses much more than machine learning, which has nevertheless been its most cited and most used sub-branch over the last decade. Since most clinical problems can be modeled through machine learning classifiers, it is essential to discuss their main elements. This review aims to give primary educational insights into the most accessible and widely employed classifiers in the radiology field, distinguishing between "shallow" learning (i.e., traditional machine learning) algorithms, including support vector machines, random forests, and XGBoost, and "deep" learning architectures, including convolutional neural networks and vision transformers. In addition, the paper outlines the key steps of classifier training and highlights the differences between the most common algorithms and architectures. Although the choice of an algorithm depends on the task and the dataset at hand, general guidelines for classifier selection are proposed in relation to task analysis, dataset size, explainability requirements, and available computing resources. Considering the enormous interest in these innovative models and architectures, the problem of machine learning algorithm interpretability is finally discussed, providing a future perspective on trustworthy artificial intelligence.

Relevance statement: The growing synergy between artificial intelligence and medicine fosters predictive models that aid physicians. Machine learning classifiers, from shallow learning to deep learning, offer crucial insights for the development of clinical decision support systems in healthcare. Explainability is a key feature of models that leads systems toward integration into clinical practice.

Key points
• Training a shallow classifier requires extracting disease-related features from regions of interest (e.g., radiomics features).
• Deep classifiers perform automatic feature extraction and classification.
• Classifier selection is based on data and computational resource availability, the task, and explanation needs.
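To make the shallow workflow concrete, the sketch below trains two of the classifiers the review names (a support vector machine and a random forest) on a tabular feature matrix, as one would after radiomics feature extraction. It is an illustrative example, not code from the paper: the data is synthetic, with scikit-learn's make_classification standing in for real radiomics features and patient labels.

```python
# Minimal sketch of the "shallow" workflow: features are assumed to have been
# extracted beforehand (e.g., radiomics features from a region of interest),
# so the classifier sees a plain tabular matrix. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 200 hypothetical "patients", 50 precomputed features, binary outcome
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True)),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```

Note the design point the abstract makes: the hand-crafted feature extraction step happens before this code runs, and the classifier itself never sees the image.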
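By contrast, a deep classifier consumes the image directly and learns its own features. The minimal PyTorch sketch below (again illustrative, with random tensors standing in for medical images and all layer sizes chosen arbitrarily) shows the two stages the key points describe: a convolutional backbone for automatic feature extraction and a small head for classification.

```python
# Minimal sketch of the "deep" workflow: the network ingests the image
# directly and learns its own features, so no hand-crafted feature
# extraction step is needed. Shapes are illustrative: 1-channel 64x64
# inputs, 2 output classes.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolutional backbone: learned (automatic) feature extraction
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Classification head operating on the learned feature maps
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, num_classes),  # 64x64 input -> 16x16 maps
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = SmallCNN()
batch = torch.randn(4, 1, 64, 64)  # 4 dummy grayscale "images"
logits = model(batch)              # shape: (4, 2)
print(logits.shape)
```

This trade-off is exactly the selection guideline in the abstract: the deep model removes the feature-engineering step but demands more data and compute, and its learned features are harder to explain than a radiomics feature table.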

Source journal
European Radiology Experimental (Medicine: Radiology, Nuclear Medicine and Imaging)
CiteScore: 6.70 · Self-citation rate: 2.60% · Annual articles: 56 · Review time: 18 weeks