
Physical and Engineering Sciences in Medicine: Latest Articles

A full-scale attention-augmented CNN-transformer model for segmentation of oropharyngeal mucosa organs-at-risk in radiotherapy.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-09-11 | DOI: 10.1007/s13246-025-01614-1 | Pages: 1703-1714
Lian He, Jianda Sun, Shanfu Lu, Jingyang Li, Xiaoqing Wang, Ziye Yan, Jian Guan
Radiation-induced oropharyngeal mucositis (ROM) is a common and severe side effect of radiotherapy in nasopharyngeal cancer patients, leading to significant clinical complications such as malnutrition, infections, and treatment interruptions. Accurate delineation of the oropharyngeal mucosa (OPM) as an organ-at-risk (OAR) is crucial to minimizing radiation exposure and preventing ROM. This study aims to develop and validate an advanced automatic segmentation model, the attention-augmented Swin U-Net transformer (AA-Swin UNETR), for accurate delineation of the OPM to improve radiotherapy planning and reduce the incidence of ROM. We proposed a hybrid CNN-transformer model, AA-Swin UNETR, based on the Swin UNETR framework, which integrates hierarchical feature extraction with full-scale attention mechanisms. The model includes a Swin Transformer-based encoder and a CNN-based decoder with residual blocks, connected via a full-scale feature connection scheme. The full-scale attention mechanism enables the model to capture long-range dependencies and multi-level features effectively, enhancing segmentation accuracy. The model was trained on a dataset of 202 CT scans from Nanfang Hospital, using expert manual delineations as the gold standard. We evaluated the performance of AA-Swin UNETR against state-of-the-art (SOTA) segmentation models, including Swin UNETR, nnUNet, and 3D UX-Net, using geometric and dosimetric evaluation parameters. The geometric metrics include the Dice similarity coefficient (DSC), surface DSC (sDSC), volume similarity (VS), Hausdorff distance (HD), precision, and recall. The dosimetric metrics include the changes in D0.1cc and Dmean (ΔD0.1cc and ΔDmean) between results derived from the manually delineated OPM and the auto-segmentation models. The AA-Swin UNETR model achieved the highest mean DSC of 87.72 ± 1.98%, significantly outperforming Swin UNETR (83.53 ± 2.59%), nnUNet (85.48 ± 2.68%), and 3D UX-Net (80.04 ± 3.76%). The model also showed superior mean sDSC (98.44 ± 1.08%), mean VS (97.86 ± 1.43%), mean precision (87.60 ± 3.06%), and mean recall (89.22 ± 2.70%), with a competitive mean HD of 9.03 ± 2.79 mm. For the dosimetric evaluation, the proposed model generated the smallest mean ΔD0.1cc (0.46 ± 4.92 cGy) and mean ΔDmean (6.26 ± 24.90 cGy) relative to manual delineation compared with the other auto-segmentation results (mean ΔD0.1cc: Swin UNETR = -0.56 ± 7.28 cGy, nnUNet = 0.99 ± 4.73 cGy, 3D UX-Net = -0.65 ± 8.05 cGy; mean ΔDmean: Swin UNETR = 7.46 ± 43.37 cGy, nnUNet = 21.76 ± 37.86 cGy, and 3D UX-Net = 44.61 ± 62.33 cGy). In this paper, we proposed a hybrid transformer-CNN deep-learning model, AA-Swin UNETR, for automatic segmentation of the OPM as an OAR structure in radiotherapy planning. Evaluations with geometric and dosimetric parameters demonstrated that AA-Swin UNETR can generate delineations close to a manual reference, in terms of both geometry and dose-volume metrics. The proposed model outperforms existing SOTA models on both sets of evaluation metrics and demonstrates the ability to accurately segment the complex anatomy of the OPM, providing a reliable tool for enhanced radiotherapy planning.
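The geometric metrics reported above are standard and straightforward to reproduce. Below is a minimal NumPy/SciPy sketch of DSC, precision/recall, and Hausdorff distance on binary masks; the toy volumes are invented stand-ins, not the authors' evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def precision_recall(pred: np.ndarray, ref: np.ndarray):
    """Voxel-wise precision and recall of a predicted mask."""
    tp = np.logical_and(pred, ref).sum()
    return tp / pred.sum(), tp / ref.sum()

def hausdorff(pred: np.ndarray, ref: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground voxel sets (voxel units)."""
    p, r = np.argwhere(pred), np.argwhere(ref)
    return max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])

# Toy volumes standing in for a manual and an automatic OPM contour.
ref = np.zeros((32, 32, 32), dtype=bool); ref[8:20, 8:20, 8:20] = True
pred = np.zeros_like(ref);                pred[9:21, 8:20, 8:20] = True
print(dice(pred, ref), precision_recall(pred, ref), hausdorff(pred, ref))
```
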
Impact of differences in computed tomography value-electron density/physical density conversion tables on calculate dose in low-density areas.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-07-23 | DOI: 10.1007/s13246-025-01611-4 | Pages: 1679-1689 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738602/pdf/
Mia Nomura, Shunsuke Goto, Mizuki Yoshioka, Yuiko Kato, Ayaka Tsunoda, Kunio Nishioka, Yoshinori Tanabe

In radiotherapy treatment planning, the extrapolation of computed tomography (CT) values for low-density areas without known materials may differ between CT scanners, resulting in different calculated doses. We evaluated the differences in the percentage depth dose (PDD) calculated using eight CT scanners. Heterogeneous virtual phantoms were created using LN-300 lung and -900 HU. For the two types of virtual phantoms, the PDD on the central axis was calculated using five energies, two irradiation field sizes, and two calculation algorithms (the anisotropic analytical algorithm and Acuros XB). For the LN-300 lung, the maximum CT value difference between the eight CT scanners was 51 HU for an electron density (ED) of 0.29 and 8.8 HU for an extrapolated ED of 0.05. The LN-300 lung CT values showed little variation in the CT-ED/physical density data among CT scanners. The difference in the point depth for the PDD in the LN-300 lung between the CT scanners was < 0.5% for all energies and calculation algorithms. Using Acuros XB, the PDD at -900 HU had a maximum difference between facilities of > 5%, and the dose difference corresponding to an LN-300 lung CT value difference of > 20 HU was > 1% at a field size of 2 × 2 cm². The study findings suggest that the calculated dose of low-density regions without known materials in the CT-ED conversion table introduces a risk of dose differences between facilities because of the calibration of the CT values, even when the same CT-ED phantom, radiation treatment planning system, and treatment devices are used.
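The dose differences trace back to how planning systems look up electron density from CT number: interpolating between calibrated points and extrapolating beyond them. The sketch below illustrates that lookup with a hypothetical four-point CT-ED table; the HU/ED pairs are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration points (HU -> relative electron density),
# e.g. from a CT-ED phantom scan; values are illustrative only.
hu_points = np.array([-1000.0, -700.0, 0.0, 1000.0])
ed_points = np.array([0.0, 0.29, 1.0, 1.56])

def hu_to_ed(hu: float) -> float:
    """Piecewise-linear lookup; linearly extrapolates below the lowest entry,
    which is where differently calibrated scanners can disagree."""
    if hu <= hu_points[0]:
        slope = (ed_points[1] - ed_points[0]) / (hu_points[1] - hu_points[0])
        return max(0.0, ed_points[0] + slope * (hu - hu_points[0]))
    return float(np.interp(hu, hu_points, ed_points))

# A -900 HU voxel falls in the sparsely sampled low-density segment
# between air and the lung-equivalent material:
print(hu_to_ed(-900.0))
```

Because few or no calibration materials sit between air and lung-equivalent density, small differences in the calibrated points propagate into the ED assigned near -900 HU, and from there into the calculated dose.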

A review of image processing and analysis of computed tomography images using deep learning methods.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-09-03 | DOI: 10.1007/s13246-025-01635-w | Pages: 1491-1523 | Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738611/pdf/
Darcie Anderson, Prabhakar Ramachandran, Jamie Trapp, Andrew Fielding

The use of machine learning has seen extraordinary growth since the development of deep learning techniques, notably the deep artificial neural network. Deep learning methodology excels at complicated problems such as image classification, object detection, and natural language processing. A key feature of these networks is their capability to extract useful patterns from vast quantities of complex data, including images. As many branches of healthcare revolve around the generation, processing, and analysis of images, these techniques have become increasingly commonplace. This is especially true for radiotherapy, which relies on the use of anatomical and functional images from a range of imaging modalities, such as Computed Tomography (CT). The aim of this review is to provide an understanding of deep learning methodologies, including neural network types and structure, and to link these general concepts to medical CT image processing for radiotherapy. Specifically, it focuses on the stages of enhancement and analysis, incorporating image denoising, super-resolution, generation, registration, and segmentation, supported by examples of recent literature.

Explainable hierarchical machine-learning approaches for multimodal prediction of conversion from mild cognitive impairment to Alzheimer's disease.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-08-11 | DOI: 10.1007/s13246-025-01618-x | Pages: 1741-1759
Soheil Zarei, Mohsen Saffar, Reza Shalbaf, Peyman Hassani Abharian, Ahmad Shalbaf

Alzheimer's disease (AD) is a neurodegenerative disorder that challenges early diagnosis and intervention, yet the black-box nature of many predictive models limits clinical adoption. In this study, we developed an advanced machine learning (ML) framework that integrates hierarchical feature selection with multiple classifiers to predict progression from mild cognitive impairment (MCI) to AD. Using baseline data from 580 participants in the Alzheimer's Disease Neuroimaging Initiative (ADNI), categorized into stable MCI (sMCI) and progressive MCI (pMCI) subgroups, we analyzed features both individually and across seven key groups. The neuropsychological test group exhibited the highest predictive power, with several of the top individual predictors drawn from this domain. Hierarchical feature selection, combining initial statistical filtering with machine-learning-based refinement, narrowed the feature set to the eight most informative variables. To demystify model decisions, we applied SHAP (SHapley Additive exPlanations) explainability analysis, quantifying each feature's contribution to conversion risk. The explainable random forest classifier, optimized on these selected features, achieved 83.79% accuracy (84.93% sensitivity, 83.32% specificity), outperforming the other methods and revealing hippocampal volume, delayed memory recall (LDELTOTAL), and Functional Activities Questionnaire (FAQ) scores as the top drivers of conversion. These results underscore the effectiveness of combining diverse data sources with advanced ML models, and demonstrate that transparent, SHAP-driven insights align with known AD biomarkers, transforming our model from a predictive black box into a clinically actionable tool for early diagnosis and patient stratification.
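As a concrete illustration of the two-stage pipeline, the sketch below runs a univariate statistical filter, fits a random forest on the retained features, and computes SHAP attributions with shap.TreeExplainer. The data are synthetic stand-ins, and the feature counts and hyperparameters are assumptions for illustration, not the study's settings.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic stand-in data: rows = MCI participants, columns = candidate
# features (volumes, cognitive scores, ...); y = 1 marks pMCI converters.
rng = np.random.default_rng(0)
X = rng.normal(size=(580, 40))
y = (X[:, 0] + X[:, 3] + rng.normal(scale=2.0, size=580) > 0).astype(int)

# Stage 1: univariate statistical filter keeps the 8 strongest features.
selector = SelectKBest(f_classif, k=8).fit(X, y)
X_sel = selector.transform(X)

# Stage 2: random forest trained on the reduced feature set.
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_sel, y)

# SHAP attributions quantify each feature's contribution per prediction.
sv = shap.TreeExplainer(clf).shap_values(X_sel)
if isinstance(sv, list):      # older SHAP: one array per class
    sv = sv[1]
elif sv.ndim == 3:            # newer SHAP: (samples, features, classes)
    sv = sv[:, :, 1]
print(np.abs(sv).mean(axis=0))  # global importance: mean |SHAP| per feature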

Prop scan versus roll scan: selection for cranial three-dimensional rotational angiography using in-house phantom and Figure of Merit as parameter.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-09-10 | DOI: 10.1007/s13246-025-01632-z | Pages: 1935-1947
Ika Hariyati, Ani Sulistyani, Matthew Gregorius, Harimulti Aribowo, Ungguh Prawoto, Defri Dwi Yana, Thariqah Salamah, Lukmanda Evan Lubis, Djarwani Soeharso Soejoko

This study introduces a novel optimization framework for cranial three-dimensional rotational angiography (3DRA), combining the development of a brain-equivalent in-house phantom with the Figure of Merit (FOM), a quantitative evaluation method. The technical contribution involves the development of an in-house phantom constructed using iodine-infused epoxy and lycal resins, validated against clinical Hounsfield Units (HU). A customized head phantom was developed to simulate brain tissue and cranial vasculature for 3DRA optimization. The phantom was constructed using epoxy resin with 0.15-0.2% iodine to replicate brain tissue and lycal resin with iodine concentrations ranging from 0.65 to 0.7% to simulate blood vessels of varying diameters. The phantom materials were validated by comparing their HU values to clinical reference HU values from brain tissue and cranial vessels, ensuring accurate tissue simulation. The validated phantom was used to acquire images using cranial 3DRA protocols, specifically Prop-Scan and Roll-Scan. Image quality was assessed using the Signal-Difference-to-Noise Ratio (SDNR), Dose-Area Product (DAP), and Modulation Transfer Function (MTF). Imaging efficiency was quantified using the FOM, calculated as SDNR²/DAP, to objectively compare the performance of the two cranial 3DRA protocols. The task-based optimization showed that Roll-Scan consistently outperformed Prop-Scan across all vessel sizes and regions. Roll-Scan yielded FOM values ranging from 183 to 337, while Prop-Scan FOM values ranged from 96 to 189. Additionally, Roll-Scan (0.27 lp/pixel) delivered better spatial resolution, as indicated by a higher MTF 10% value than Prop-Scan (0.23 lp/pixel). Most notably, Roll-Scan consistently detected 2 mm vessel structures in all regions of the phantom. This capability is clinically important in cerebral angiography, where accurate visualization of small vessels, i.e. the Anterior Cerebral Artery (ACA), Posterior Cerebral Artery (PCA), and Middle Cerebral Artery (MCA), is required. These findings highlight Roll-Scan as the superior protocol for brain interventional imaging, underscoring the significance of the FOM as a comprehensive parameter for optimizing imaging protocols in clinical practice. The experimental results support the use of the Roll-Scan protocol as the preferred acquisition method for cerebral angiography in clinical practice. The analysis using the FOM provides substantial and quantifiable evidence for determining acquisition methods. Furthermore, the customized in-house phantom is recommended as a candidate optimization tool for clinical medical physicists.
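The FOM is simple to compute from an ROI pair and the DAP meter reading. A minimal sketch follows, with invented pixel statistics and DAP values standing in for real Prop-Scan and Roll-Scan acquisitions:

```python
import numpy as np

def sdnr(vessel_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Signal-difference-to-noise ratio between a vessel ROI and background."""
    return abs(vessel_roi.mean() - background_roi.mean()) / background_roi.std()

def figure_of_merit(vessel_roi, background_roi, dap: float) -> float:
    """FOM = SDNR^2 / DAP: image quality delivered per unit dose."""
    return sdnr(vessel_roi, background_roi) ** 2 / dap

# Invented ROI pixel values and DAP readings for the two protocols.
rng = np.random.default_rng(1)
vessel = rng.normal(180.0, 12.0, 500)
background = rng.normal(120.0, 12.0, 500)
print(figure_of_merit(vessel, background, dap=2.4))  # e.g. Roll-Scan
print(figure_of_merit(vessel, background, dap=4.1))  # e.g. Prop-Scan
```

Squaring the SDNR before dividing by DAP rewards protocols that buy extra contrast-to-noise cheaply in dose, which is why the metric can rank protocols rather than just images.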

Enhanced detection of ovarian cancer using AI-optimized 3D CNNs for PET/CT scan analysis.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-08-04 | DOI: 10.1007/s13246-025-01615-0 | Pages: 2087-2102
Mohammad Hossein Sadeghi, Sedigheh Sina, Reza Faghihi, Mehrosadat Alavi, Francesco Giammarile, Hamid Omidi

This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [18F]FDG PET image analysis. OCDA-Net, an advancement on the ResNet architecture, was thoroughly compared using randomly split datasets of training (80%), validation (10%), and test (10%) images. Trained over 100 epochs, OCDA-Net achieved superior diagnostic classification with an accuracy of 92%, and staging results of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heat-maps confirmed that the network attends to hyper-metabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.
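For readers unfamiliar with volumetric classifiers, the sketch below shows the general shape of a 3D CNN applied to PET volumes in PyTorch. It is a deliberately tiny stand-in for illustration, not OCDA-Net or any of the compared architectures.

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Minimal 3D CNN for volume classification; an illustrative stand-in,
    not the OCDA-Net architecture described in the paper."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),        # global pooling over D, H, W
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                   # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))

model = Tiny3DCNN()
logits = model(torch.randn(2, 1, 64, 64, 64))   # two dummy PET volumes
print(logits.shape)                              # torch.Size([2, 2])
```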

Clinical evaluation of motion robust reconstruction using deep learning in lung CT.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-09-10 | DOI: 10.1007/s13246-025-01633-y | Pages: 1949-1954
Shiho Kuwajima, Daisuke Oura

In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion-correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT scans were analyzed, and the heart rate, height, weight, and BMI of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of the Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is effective. To evaluate the effect of motion correction based on patient characteristics, the correlation between body mass index (BMI), heart rate, and DVL was determined. Visual assessment of motion artifacts was performed using paired comparisons by nine radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. Almost all cases (110) showed a large DVL in the lower part. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). Using CLEAR Motion allows images with fewer motion artifacts to be obtained in lung CT.
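Both image-quality measures are one-liners in practice. The following sketch shows a plausible implementation of VL (via OpenCV's Laplacian) and PSNR; the random slices and the 12-bit data range are assumptions for illustration, not the study's data.

```python
import numpy as np
import cv2

def variance_of_laplacian(img: np.ndarray) -> float:
    """VL sharpness measure: more edge detail (less blur) gives higher variance."""
    return cv2.Laplacian(img.astype(np.float64), cv2.CV_64F).var()

def psnr(a: np.ndarray, b: np.ndarray, data_range: float = 4096.0) -> float:
    """Peak signal-to-noise ratio between two reconstructions of one slice."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# Random slices standing in for reconstructions with/without CLEAR Motion.
rng = np.random.default_rng(2)
standard = rng.integers(0, 4096, (512, 512)).astype(np.float64)
corrected = standard + rng.normal(0.0, 5.0, (512, 512))
print(variance_of_laplacian(corrected), psnr(standard, corrected))
```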

A comparison of two bolus types for radiotherapy following immediate breast reconstruction.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-07-28 | DOI: 10.1007/s13246-025-01604-3 | Pages: 1601-1609
Kasia Bobrowski, Jonathon Lee

Immediate breast reconstruction is increasing in use in Australia and accounts for almost 10% of breast cancer patients (Roder in Breast 22:1220-1225, 2013). Many treatments include a bolus to increase the dose to the skin surface. Air gaps under the bolus increase uncertainty in dosimetry, and many bolus types are unable to conform to the shape of the breast or are not flexible throughout treatment if there is a swelling-induced contour change. This study investigates the use of two bolus types that can be manufactured in house: wet combine and ThermoBolus. Wet combine is a material composed of several water-soaked dressings. ThermoBolus is a product developed in-house that consists of thermoplastic encased in silicone. Plans using a volumetric arc therapy technique were created for each bolus, and dosimetry was performed with thermoluminescent detectors (TLDs) and EBT-3 film over three fractions. Wax was used to simulate swelling and allow analysis of the flexibility of the bolus materials. ThermoBolus had a range of agreement with calculation from -2 to 4% for film measurement and -5.6 to 1.0% for TLDs. Wet combine had a range of agreement with calculation from 1.6 to 10.5% for film measurement and -13.5 to 13.1% for TLDs. It showed consistent conformity and flexibility for all fractions and with the induced contour, but air gaps of 2-3 mm were observed between layers of the material. ThermoBolus and wet combine are able to conform to contour change without the introduction of large air gaps between the patient surface and the bolus. ThermoBolus is reusable and can be remoulded if the patient undergoes significant contour change during the course of treatment. It is able to be modelled accurately by the treatment planning system. Wet combine shows inconsistency in manufacture and requires more than one bolus to be made over the course of treatment, reducing accuracy in modelling and dosimetry.

A computational eye state classification model using EEG signal based on data mining techniques: comparative analysis.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-08-04 | DOI: 10.1007/s13246-025-01619-w | Pages: 1761-1774
Subhash Mondal, Amitava Nag

Artificial Intelligence has shown great promise in healthcare, particularly in non-invasive diagnostics using bio-signals. This study focuses on classifying eye states (open or closed) using Electroencephalogram (EEG) signals captured via a 14-electrode neuroheadset, recorded through a Brain-Computer Interface (BCI). A publicly available dataset comprising 14,980 instances was used, where each sample represents EEG signals corresponding to eye activity. Fourteen classical machine learning (ML) models were evaluated using a tenfold cross-validation approach. The preprocessing pipeline involved removing outliers using the Z-score method, addressing class imbalance with SMOTETomek, and applying a bandpass filter to reduce signal noise. Significant EEG features were selected using a two-sample independent t-test (p < 0.05), ensuring that only statistically relevant electrodes were retained. Additionally, the Common Spatial Pattern (CSP) method was used for feature extraction to enhance class separability by maximizing variance differences between eye states. Experimental results demonstrate that several classifiers achieved strong performance, with accuracy above 90%. The k-Nearest Neighbours classifier yielded the highest accuracy of 97.92% with CSP, and 97.75% without CSP. The application of CSP also enhanced the performance of the Multi-Layer Perceptron and Support Vector Machine, reaching accuracies of 95.30% and 93.93%, respectively. The results affirm that integrating statistical validation, signal processing, and ML techniques can enable accurate and efficient EEG-based eye state classification, with practical implications for real-time BCI systems, offering a lightweight solution for real-time wearable healthcare applications.
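The sketch below strings together plausible versions of the main pipeline steps: a zero-phase band-pass filter, a two-sample t-test channel filter, CSP feature extraction (via mne.decoding.CSP), and a k-NN classifier. The sampling rate, band edges, and epoching are assumptions, the SMOTETomek balancing step is omitted for brevity, and the data are random stand-ins.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import ttest_ind
from mne.decoding import CSP
from sklearn.neighbors import KNeighborsClassifier

FS = 128.0  # assumed headset sampling rate

def bandpass(x, lo=1.0, hi=40.0, fs=FS):
    """Zero-phase band-pass to suppress drift and high-frequency noise."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# Random stand-in epochs: (n_epochs, n_channels, n_samples), labels 0/1.
rng = np.random.default_rng(0)
X = bandpass(rng.normal(size=(200, 14, 128)))
y = rng.integers(0, 2, 200)

# Keep channels whose mean amplitude differs between eye states (p < 0.05).
_, p = ttest_ind(X[y == 0].mean(axis=2), X[y == 1].mean(axis=2), axis=0)
keep = p < 0.05
if keep.sum() >= 4:           # fall back to all channels if too few survive
    X = X[:, keep, :]

# CSP spatial filters maximise the variance contrast between the two classes.
feats = CSP(n_components=4, log=True).fit_transform(X, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(feats, y)
print(knn.score(feats, y))    # training accuracy on the synthetic data
```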

A non-contact blood pressure measurement method based on face video.
IF 2.0 | CAS Q4 (Medicine) | JCR Q3 (Engineering, Biomedical) | Pub Date: 2025-12-01 | Epub Date: 2025-10-20 | DOI: 10.1007/s13246-025-01645-8 | Pages: 2059-2067
Lifeng Yang, Shaojie Gu, Binbin Liu, Junjie Wang, Junwei Cheng, Yuanxi Zhang, Zhengan Xia, Yan Yang

Blood pressure is an essential indicator of cardiovascular health, and regular, accurate blood pressure measurement is crucial for preventing cardiovascular diseases. The emergence of photoplethysmography (PPG) and the advancement of machine learning offer new opportunities for noninvasive blood pressure measurement. This paper proposes a non-contact method for measuring blood pressure using face video and machine learning. The method extracts facial remote photoplethysmography (RPPG) signals from face video captured by a camera and enhances the signal quality of the RPPG through a set of filtering processes. A blood pressure regression model is constructed using the extreme gradient boosting tree (XGBoost) method to estimate blood pressure from the RPPG signals. This approach achieved accurate blood pressure measurement, with a measurement error of 4.8893 ± 6.6237 mmHg for systolic pressure and 4.0805 ± 5.5821 mmHg for diastolic pressure. Experimental results show that this method fully complies with the standard of the Association for the Advancement of Medical Instrumentation (AAMI). Our proposed method has minor errors in predicting the systolic and diastolic blood pressures and achieves a grade A evaluation for both systolic and diastolic blood pressure according to the British Hypertension Society (BHS) standards.
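A minimal end-to-end sketch of the signal-to-regressor idea: band-pass an RPPG trace to the pulse band, derive a toy feature vector, and fit an XGBoost regressor to systolic pressure. The frame rate, band edges, features, and hyperparameters are all illustrative assumptions; the paper's actual filtering chain and feature set are richer.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from xgboost import XGBRegressor

FS = 30.0  # assumed webcam frame rate

def clean_rppg(raw: np.ndarray, fs: float = FS) -> np.ndarray:
    """Band-pass the facial colour trace to the 0.7-4 Hz (42-240 bpm)
    pulse band, one plausible step in the paper's filtering chain."""
    b, a = butter(3, [0.7 / (fs / 2), 4.0 / (fs / 2)], btype="band")
    return filtfilt(b, a, raw)

def simple_features(sig: np.ndarray, fs: float = FS) -> np.ndarray:
    """Toy feature vector: dominant pulse frequency plus amplitude stats."""
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    return np.array([freqs[spectrum.argmax()], sig.std(), np.ptp(sig)])

# Illustrative training set: one RPPG trace and one SBP label per subject.
rng = np.random.default_rng(4)
X = np.stack([simple_features(clean_rppg(rng.normal(size=900)))
              for _ in range(100)])
sbp = rng.normal(120.0, 10.0, 100)

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X, sbp)
print(model.predict(X[:3]))   # predicted systolic pressures, mmHg
```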
