Alzheimer's disease (AD) is a neurodegenerative disorder that challenges early diagnosis and intervention, yet the black-box nature of many predictive models limits clinical adoption. In this study, we developed an advanced machine learning (ML) framework that integrates hierarchical feature selection with multiple classifiers to predict progression from mild cognitive impairment (MCI) to AD. Using baseline data from 580 participants in the Alzheimer's Disease Neuroimaging Initiative (ADNI), categorized into stable MCI (sMCI) and progressive MCI (pMCI) subgroups, we analyzed features both individually and across seven key groups. The neuropsychological test group exhibited the highest predictive power, with several of the top individual predictors drawn from this domain. Hierarchical feature selection, combining initial statistical filtering with machine-learning-based refinement, narrowed the feature set to the eight most informative variables. To demystify model decisions, we applied SHAP (SHapley Additive exPlanations) explainability analysis, quantifying each feature's contribution to conversion risk. The explainable random forest classifier, optimized on these selected features, achieved 83.79% accuracy (84.93% sensitivity, 83.32% specificity), outperforming other methods and revealing hippocampal volume, delayed memory recall (LDELTOTAL), and Functional Activities Questionnaire (FAQ) scores as the top drivers of conversion. These results underscore the effectiveness of combining diverse data sources with advanced ML models, and demonstrate that transparent, SHAP-driven insights align with known AD biomarkers, transforming our model from a predictive black box into a clinically actionable tool for early diagnosis and patient stratification.
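The reported performance figures all follow from a binary confusion matrix over the sMCI/pMCI labels. A minimal sketch of how accuracy, sensitivity, and specificity are computed (the confusion counts in the example are hypothetical, purely for illustration, and are not taken from the study):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from binary confusion counts.

    tp/fn: converters (pMCI) predicted correctly / missed.
    tn/fp: non-converters (sMCI) predicted correctly / misflagged.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true converters detected
    specificity = tn / (tn + fp)   # fraction of true non-converters detected
    return accuracy, sensitivity, specificity

# Hypothetical counts for illustration only.
acc, sens, spec = classification_metrics(tp=62, fp=15, tn=75, fn=11)
```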
Soheil Zarei, Mohsen Saffar, Reza Shalbaf, Peyman Hassani Abharian, Ahmad Shalbaf. "Explainable hierarchical machine-learning approaches for multimodal prediction of conversion from mild cognitive impairment to Alzheimer's disease." Physical and Engineering Sciences in Medicine, pp. 1741-1759. DOI: 10.1007/s13246-025-01618-x
This study introduces a novel optimization framework for cranial three-dimensional rotational angiography (3DRA), combining the development of a brain-equivalent in-house phantom with the Figure of Merit (FOM) as a quantitative evaluation method. The technical contribution involves the development of an in-house phantom constructed using iodine-infused epoxy and lycal resins, validated against clinical Hounsfield Units (HU). A customized head phantom was developed to simulate brain tissue and cranial vasculature for 3DRA optimization. The phantom was constructed using epoxy resin with 0.15-0.2% iodine to replicate brain tissue and lycal resin with iodine concentrations ranging from 0.65 to 0.7% to simulate blood vessels of varying diameters. The phantom materials were validated by comparing their HU values to clinical reference HU values for brain tissue and cranial vessels, ensuring accurate tissue simulation. The validated phantom was used to acquire images with the cranial 3DRA protocols Prop-Scan and Roll-Scan. Image quality was assessed using Signal-Difference-to-Noise Ratio (SDNR), Dose-Area Product (DAP), and Modulation Transfer Function (MTF). Imaging efficiency was quantified using the Figure of Merit (FOM), calculated as SDNR²/DAP, to objectively compare the performance of the two cranial 3DRA protocols. The task-based optimization showed that Roll-Scan consistently outperformed Prop-Scan across all vessel sizes and regions. Roll-Scan yielded FOM values ranging from 183 to 337, while Prop-Scan FOM values ranged from 96 to 189. Additionally, Roll-Scan delivered better spatial resolution, as indicated by a higher MTF 10% value (0.27 lp/pixel) than Prop-Scan (0.23 lp/pixel). Most notably, Roll-Scan consistently detected 2 mm vessel structures in all regions of the phantom. This capability is clinically important in cerebral angiography, where accurate visualization of small vessels such as the Anterior Cerebral Artery (ACA), Posterior Cerebral Artery (PCA), and Middle Cerebral Artery (MCA) is required. These findings highlight Roll-Scan as the superior protocol for brain interventional imaging, underscoring the significance of the FOM as a comprehensive parameter for optimizing imaging protocols in clinical practice. The experimental results support the use of the Roll-Scan protocol as the preferred acquisition method for cerebral angiography. The FOM analysis provides substantial, quantifiable evidence for selecting acquisition methods. Furthermore, the customized in-house phantom is recommended as a candidate optimization tool for clinical medical physicists.
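The dose-efficiency comparison reduces to a one-line formula. A minimal sketch of the FOM calculation (the SDNR and DAP readings below are invented for illustration; only the FOM definition, SDNR²/DAP, comes from the abstract):

```python
def figure_of_merit(sdnr, dap):
    """FOM = SDNR^2 / DAP: dose-normalized detectability (higher is better)."""
    return sdnr ** 2 / dap

# Illustrative readings only, not measured values from the study.
fom_roll = figure_of_merit(sdnr=12.0, dap=0.5)    # -> 288.0
fom_prop = figure_of_merit(sdnr=9.0, dap=0.75)    # -> 108.0
preferred = "Roll-Scan" if fom_roll > fom_prop else "Prop-Scan"
```

Because the SDNR is squared, a protocol that trades a little dose for a larger signal difference can still win on FOM, which is why the metric is useful for protocol selection.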
Ika Hariyati, Ani Sulistyani, Matthew Gregorius, Harimulti Aribowo, Ungguh Prawoto, Defri Dwi Yana, Thariqah Salamah, Lukmanda Evan Lubis, Djarwani Soeharso Soejoko. "Prop scan versus roll scan: selection for cranial three-dimensional rotational angiography using in-house phantom and Figure of Merit as parameter." Physical and Engineering Sciences in Medicine, pp. 1935-1947. DOI: 10.1007/s13246-025-01632-z
Pub Date: 2025-12-01. Epub Date: 2025-08-04. DOI: 10.1007/s13246-025-01615-0
Mohammad Hossein Sadeghi, Sedigheh Sina, Reza Faghihi, Mehrosadat Alavi, Francesco Giammarile, Hamid Omidi
This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [18F]FDG PET image analysis. OCDA-Net, an advancement on the ResNet architecture, was thoroughly compared using randomly split datasets of training (80%), validation (10%), and test (10%) images. Trained over 100 epochs, OCDA-Net achieved superior diagnostic classification with an accuracy of 92% and a staging accuracy of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heatmaps confirmed that the network attends to hyper-metabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.
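The 80/10/10 random split described above can be sketched with the standard library; the function name, seed, and proportions here are illustrative, not the authors' code:

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle items reproducibly and split into train/val/test partitions.

    The test partition receives whatever remains after train and val.
    """
    items = list(items)
    random.Random(seed).shuffle(items)   # seeded for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
# 80 / 10 / 10 items respectively; together they cover all 100.
```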
Mohammad Hossein Sadeghi, Sedigheh Sina, Reza Faghihi, Mehrosadat Alavi, Francesco Giammarile, Hamid Omidi. "Enhanced detection of ovarian cancer using AI-optimized 3D CNNs for PET/CT scan analysis." Physical and Engineering Sciences in Medicine, pp. 2087-2102. DOI: 10.1007/s13246-025-01615-0
Pub Date: 2025-12-01. Epub Date: 2025-09-10. DOI: 10.1007/s13246-025-01633-y
Shiho Kuwajima, Daisuke Oura
In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT scans were analyzed, and the heart rate, height, weight, and body mass index (BMI) of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of the Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is most effective. To evaluate the effect of motion correction based on patient characteristics, the correlation between BMI, heart rate, and DVL was determined. Visual assessment of motion artifacts was performed using paired comparisons by nine radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. Almost all cases (110) showed a large DVL in the lower part. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). Using CLEAR Motion allows images with fewer motion artifacts to be obtained in lung CT.
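Both image-quality metrics used above have compact definitions. A minimal pure-Python sketch on images represented as 2-D lists of grey values (a real implementation would use an image-processing library; this only illustrates the two formulas, with a 4-neighbour Laplacian kernel as one common choice):

```python
import math

LAPLACIAN = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]  # 4-neighbour kernel

def variance_of_laplacian(img):
    """Sharpness proxy: variance of the Laplacian response over the interior
    pixels (higher variance = more edge energy = sharper image)."""
    h, w = len(img), len(img[0])
    resp = [sum(LAPLACIAN[di][dj] * img[i - 1 + di][j - 1 + dj]
                for di in range(3) for dj in range(3))
            for i in range(1, h - 1) for j in range(1, w - 1)]
    mean = sum(resp) / len(resp)
    return sum((r - mean) ** 2 for r in resp) / len(resp)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    sq = [(a - b) ** 2 for ra, rb in zip(ref, test) for a, b in zip(ra, rb)]
    mse = sum(sq) / len(sq)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

A perfectly flat image has zero Laplacian variance, and identical images have infinite PSNR, which makes both functions easy to sanity-check.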
Shiho Kuwajima, Daisuke Oura. "Clinical evaluation of motion robust reconstruction using deep learning in lung CT." Physical and Engineering Sciences in Medicine, pp. 1949-1954. DOI: 10.1007/s13246-025-01633-y
Pub Date: 2025-12-01. Epub Date: 2025-07-28. DOI: 10.1007/s13246-025-01604-3
Kasia Bobrowski, Jonathon Lee
Immediate breast reconstruction is increasing in use in Australia and accounts for almost 10% of breast cancer patients (Roder in Breast 22:1220-1225, 2013). Many treatments include a bolus to increase the dose to the skin surface. Air gaps under the bolus increase uncertainty in dosimetry, and many bolus types are unable to conform to the shape of the breast or are not flexible throughout treatment if there is a swelling-induced contour change. This study investigates the use of two bolus types that can be manufactured in-house: wet combine and ThermoBolus. Wet combine is a material composed of several water-soaked dressings. ThermoBolus is a product developed in-house that consists of thermoplastic encased in silicone. Plans using a volumetric arc therapy technique were created for each bolus, and dosimetry was performed with thermoluminescent detectors (TLDs) and EBT-3 film over three fractions. Wax was used to simulate swelling and allow analysis of the flexibility of the bolus materials. ThermoBolus had a range of agreement with calculation from -2 to 4% for film measurements and -5.6 to 1.0% for TLDs. Wet combine had a range of agreement with calculation from 1.6 to 10.5% for film measurements and -13.5 to 13.1% for TLDs. It showed consistent conformity and flexibility for all fractions and with the induced contour change, but air gaps of 2-3 mm were observed between layers of the material. ThermoBolus and wet combine are able to conform to contour change without the introduction of large air gaps between the patient surface and the bolus. ThermoBolus is reusable and can be remoulded if the patient undergoes significant contour change during the course of treatment. It can be modelled accurately by the treatment planning system. Wet combine shows inconsistency in manufacture and requires more than one bolus to be made over the course of treatment, reducing accuracy in modelling and dosimetry.
Kasia Bobrowski, Jonathon Lee. "A comparison of two bolus types for radiotherapy following immediate breast reconstruction." Physical and Engineering Sciences in Medicine, pp. 1601-1609. DOI: 10.1007/s13246-025-01604-3
Pub Date: 2025-12-01. Epub Date: 2025-08-04. DOI: 10.1007/s13246-025-01619-w
Subhash Mondal, Amitava Nag
Artificial intelligence has shown great promise in healthcare, particularly in non-invasive diagnostics using biosignals. This study focuses on classifying eye states (open or closed) using electroencephalogram (EEG) signals captured via a 14-electrode neuroheadset, recorded through a Brain-Computer Interface (BCI). A publicly available dataset comprising 14,980 instances was used, where each sample represents EEG signals corresponding to eye activity. Fourteen classical machine learning (ML) models were evaluated using a tenfold cross-validation approach. The preprocessing pipeline involved removing outliers using the Z-score method, addressing class imbalance with SMOTETomek, and applying a bandpass filter to reduce signal noise. Significant EEG features were selected using a two-sample independent t-test (p < 0.05), ensuring only statistically relevant electrodes were retained. Additionally, the Common Spatial Pattern (CSP) method was used for feature extraction to enhance class separability by maximizing variance differences between eye states. Experimental results demonstrate that several classifiers achieved strong performance, with accuracy above 90%. The k-Nearest Neighbours classifier yielded the highest accuracy of 97.92% with CSP and 97.75% without CSP. The application of CSP also enhanced the performance of the Multi-Layer Perceptron and Support Vector Machine, which reached accuracies of 95.30% and 93.93%, respectively. The results affirm that integrating statistical validation, signal processing, and ML techniques can enable accurate and efficient EEG-based eye state classification, with practical implications for real-time BCI systems, and offer a lightweight solution for real-time wearable healthcare applications.
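The first preprocessing step, Z-score outlier removal, can be sketched with the standard library; the threshold and sample values below are illustrative only, not taken from the study:

```python
from statistics import mean, stdev

def zscore_filter(samples, threshold=3.0):
    """Keep only samples within `threshold` standard deviations of the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) <= threshold * sigma]

# A spike of 100 among readings near 5 is dropped at a 2-SD threshold.
cleaned = zscore_filter([4, 5, 6, 5, 4, 100], threshold=2.0)
# cleaned == [4, 5, 6, 5, 4]
```

In practice this would be applied per electrode channel before resampling and filtering; a single pass like this only illustrates the criterion.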
Subhash Mondal, Amitava Nag. "A computational eye state classification model using EEG signal based on data mining techniques: comparative analysis." Physical and Engineering Sciences in Medicine, pp. 1761-1774. DOI: 10.1007/s13246-025-01619-w
Pub Date: 2025-12-01. Epub Date: 2025-10-20. DOI: 10.1007/s13246-025-01645-8
Lifeng Yang, Shaojie Gu, Binbin Liu, Junjie Wang, Junwei Cheng, Yuanxi Zhang, Zhengan Xia, Yan Yang
Blood pressure is an essential indicator of cardiovascular health, and regular, accurate blood pressure measurement is vital for preventing cardiovascular diseases. The emergence of photoplethysmography (PPG) and the advancement of machine learning offer new opportunities for noninvasive blood pressure measurement. This paper proposes a non-contact method for measuring blood pressure using face video and machine learning. The method extracts facial remote photoplethysmography (rPPG) signals from face video captured by a camera and enhances the signal quality of the rPPG through a set of filtering processes. A blood pressure regression model is constructed using the extreme gradient boosting tree (XGBoost) method to estimate blood pressure from the rPPG signals. This approach achieved accurate blood pressure measurement, with a measurement error of 4.8893 ± 6.6237 mmHg for systolic pressure and 4.0805 ± 5.5821 mmHg for diastolic pressure. Experimental results show that the method fully complies with the Association for the Advancement of Medical Instrumentation (AAMI) standard. Our proposed method has small errors in predicting systolic and diastolic blood pressure and achieves a grade A evaluation for both according to the British Hypertension Society (BHS) standards.
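The two validation criteria mentioned can be checked directly from the per-subject errors. A sketch using the commonly cited thresholds (AAMI: mean error within ±5 mmHg and standard deviation at most 8 mmHg; BHS grades from the cumulative percentage of absolute errors within 5/10/15 mmHg); the error values in the test case are invented for illustration:

```python
from statistics import mean, pstdev

def aami_compliant(errors):
    """AAMI criterion as commonly stated: |mean error| <= 5 mmHg, SD <= 8 mmHg."""
    return abs(mean(errors)) <= 5.0 and pstdev(errors) <= 8.0

def bhs_grade(errors):
    """BHS grade from the percentage of absolute errors within 5/10/15 mmHg."""
    n = len(errors)
    p5, p10, p15 = (100 * sum(abs(e) <= t for e in errors) / n
                    for t in (5, 10, 15))
    if p5 >= 60 and p10 >= 85 and p15 >= 95:
        return "A"
    if p5 >= 50 and p10 >= 75 and p15 >= 90:
        return "B"
    if p5 >= 40 and p10 >= 65 and p15 >= 85:
        return "C"
    return "D"
```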
Lifeng Yang, Shaojie Gu, Binbin Liu, Junjie Wang, Junwei Cheng, Yuanxi Zhang, Zhengan Xia, Yan Yang. "A non-contact blood pressure measurement method based on face video." Physical and Engineering Sciences in Medicine, pp. 2059-2067. DOI: 10.1007/s13246-025-01645-8
Pub Date: 2025-12-01 | Epub Date: 2025-09-23 | DOI: 10.1007/s13246-025-01626-x
Manas K Nag, Anup K Sadhu, Samiran Das, Chandan Kumar, Sandeep Choudhary
Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.
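The Dice and Jaccard figures reported above are standard overlap metrics between predicted and ground-truth lesion masks. A minimal pure-Python sketch (flat 0/1 lists stand in for flattened 3D voxel masks; this is illustrative, not the authors' evaluation code):

```python
def overlap_metrics(pred, truth):
    """Dice = 2|P∩T| / (|P| + |T|); Jaccard = |P∩T| / |P∪T|.
    `pred` and `truth` are equal-length iterables of 0/1 voxel labels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    jaccard = inter / union if union else 1.0
    return dice, jaccard
```

For a single mask pair the two metrics are linked by J = D / (2 - D), so they rise and fall together; reporting both mainly aids comparison across papers.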
{"title":"3D CoAt U SegNet-enhanced deep learning framework for accurate segmentation of acute ischemic stroke lesions from non-contrast CT scans.","authors":"Manas K Nag, Anup K Sadhu, Samiran Das, Chandan Kumar, Sandeep Choudhary","doi":"10.1007/s13246-025-01626-x","DOIUrl":"10.1007/s13246-025-01626-x","url":null,"abstract":"<p><p>Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task due to the hypo-intense nature of these lesions compared to surrounding healthy brain tissue and their iso-intensity with lateral ventricles in many cases. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, this study presents an advanced 3D deep learning approach to enhance delineation accuracy. Traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. 
An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct area, contributing to improved clinical decision-making.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1853-1863"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | Epub Date: 2025-07-21 | DOI: 10.1007/s13246-025-01602-5
Luis Muñoz, Peter McLoone, Peter Metcalfe, Anatoly B Rosenfeld, Giordano Biasi
This study assesses the updated Monaco TPS virtual source model (VSM) 2.0, which removes multileaf collimator (MLC) and jaw characterization as editable factors from the MLC geometry section within Monaco. The focus is on the impact of these changes on stereotactic radiotherapy (SRT) cases for spinal and intracranial treatments on two beam-matched linear accelerators. A validated custom VSM 1.6 model optimized for SRT was compared with the Elekta Accelerated Go Live 6 MV flattening filter-free (FFF) model and VSM 2.0. Evaluations included MLC characteristics measured with a high-resolution detector, measured output factors (OPF), ion chamber fields in the thorax phantom, and recalculations of clinically relevant SRT cases. VSM 2.0 improves MLC modelling. Ion chamber measurements for the IAEA TD1583 test cases were within expected tolerances. Gamma pass rates for the two matched LINACs showed improvement at 1%/1 mm with a 10% dose threshold for single- and multi-target SRS brain and SABR spine treatments. VSM 2.0 represents a meaningful advancement in beam modelling within a Monte Carlo-based TPS environment, offering improved dosimetric performance and operational simplicity. Commercially available detectors were used to demonstrate that VSM 2.0 enhances Agility MLC modelling, supporting more precise SRT and SABR delivery on matched LINACs. Removing configurable dependencies from the beam model will result in more consistent, high-quality beam models and improved workflows for commissioning of the Monaco TPS.
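The gamma pass rates quoted above come from the standard gamma-index comparison of a calculated against a measured dose distribution. A simplified 1D sketch at a 1% dose difference, 1 mm distance-to-agreement, and 10% low-dose threshold (global normalisation is assumed here; clinical gamma analysis is 3D with sub-voxel interpolation, and the paper does not state local vs. global):

```python
def gamma_pass_rate(ref, evl, spacing_mm, dd_pct=1.0, dta_mm=1.0, thresh_pct=10.0):
    """1D global gamma: for each reference point above the low-dose threshold,
    gamma^2 = min over evaluated points of (dDose/DD)^2 + (dist/DTA)^2;
    a point passes when gamma <= 1. Returns the pass rate in percent."""
    d_max = max(ref)
    dd = dd_pct / 100.0 * d_max          # absolute dose-difference criterion
    cutoff = thresh_pct / 100.0 * d_max  # points below this dose are ignored
    passed = total = 0
    for i, dr in enumerate(ref):
        if dr < cutoff:
            continue
        total += 1
        g2 = min(((de - dr) / dd) ** 2 + ((j - i) * spacing_mm / dta_mm) ** 2
                 for j, de in enumerate(evl))
        if g2 <= 1.0:
            passed += 1
    return 100.0 * passed / total if total else 100.0
```

Tightening the criteria from the common 3%/3 mm to 1%/1 mm, as done in the study, makes the pass rate a far more sensitive probe of small MLC-modelling errors.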
{"title":"Evaluating Monaco 6.2.2 in complex radiotherapy across matched LINACs: improved MLC modelling and dose accuracy with virtual source model 2.0.","authors":"Luis Muñoz, Peter McLoone, Peter Metcalfe, Anatoly B Rosenfeld, Giordano Biasi","doi":"10.1007/s13246-025-01602-5","DOIUrl":"10.1007/s13246-025-01602-5","url":null,"abstract":"<p><p>This study assesses the updated Monaco TPS virtual source model (VSM) 2.0, which removes multileaf collimator (MLC) and jaw characterization as editable factors from the MLC geometry section within Monaco. The focus is on the impact of changes to stereotactic radiotherapy (SRT) cases for spinal and intracranial treatments for two beam matched linear accelerators. A validated custom VSM 1.6 model optimized for SRT was compared with the Elekta Accelerated Go Live 6 MV flattening filter-free (FFF) and VSM 2.0. Evaluations included measured MLC characteristics with a high-resolution detector, measured output factors (OPF), ion chamber fields in the thorax phantom, and recalculations of clinically relevant SRT cases. VSM 2.0 improves MLC modelling. Ion chamber measurements for IAEA TD1583 measurements were found to be within expected tolerances. Gamma pass rates for two matched LINACs evidenced improvement at 1%, 1 mm and 10% threshold for single and multi-SRS brain and SABR Spine treatments. VSM 2.0 represents a meaningful advancement in beam modelling within a Monte Carlo-based TPS environment, offering improved dosimetric performance and operational simplicity. Commercially available detectors were used to demonstrate that VSM 2.0 enhances agility MLC modelling, supporting more precise SRT and SABR delivery for matched LINACs. 
Removing configurable dependencies from the beam model will result in more consistent, high-quality beam models and improved workflows for commissioning of the Monaco TPS.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1573-1588"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738645/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144676209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | Epub Date: 2025-08-21 | DOI: 10.1007/s13246-025-01630-1
Andrew Chacon, Sylvia Gong, Artur Cichocki, Talia Enright, Harris Panopoulos, Nathan Sonnberger, Andrew M Scott, Graeme O'Keefe
Zirconium-89 is presently undergoing pre-clinical investigation for its potential application as a positron emission tomography (PET) theranostic radioisotope. A critical consideration in the increasing number of trials and eventual clinical implementations is a comprehensive understanding of the radioactive waste byproducts and their quantification. This study focuses on the investigation and characterisation of the waste isotopes generated during the production of Zirconium-89, employing a combination of Geant4 Monte Carlo simulation and experimental methodologies utilising commercially obtainable starting materials from Thermo Fisher. After cyclotron production, waste samples were taken and measured using a high purity germanium detector. Subsequent spectrum analysis consistently revealed the presence of the following isotopes in units of kBq per GBq of Zirconium-89 produced: cobalt-56 (13 ± 2, 14 ± 2, 15 ± 1), cobalt-57 (0.087 ± 0.004, 0.097 ± 0.004, 0.086 ± 0.007), rhenium-183 (2.61 ± 0.06, 3.29 ± 0.06, 2.47 ± 0.09), scandium-48 (27 ± 0.9, 21.1 ± 0.4), yttrium-88 (0.67 ± 0.06, 1.1 ± 0.4, 0.73 ± 0.06) and zirconium-88 (90 ± 5, 1340 ± 60, 35 ± 2). The activities of all waste isotopes were reasonably estimated by the Geant4 Monte Carlo simulations, or the observed deviations could be justified. The repeatability and predictability of isotopes and activities will enable informed decision-making regarding storage and disposal procedures in accordance with local legislative requirements.
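The storage-planning decision the abstract points to reduces to simple decay arithmetic: given a measured starting activity per batch and a local clearance limit, the required hold time follows from the half-life. A minimal sketch (the half-life values and the clearance limit in the test are nominal assumptions, not figures from the paper):

```python
import math

# Nominal half-lives in days for the longest-lived contaminants
# (assumed reference values, not taken from the paper).
HALF_LIFE_D = {"Zr-88": 83.4, "Co-56": 77.2, "Co-57": 271.7, "Y-88": 106.6}

def days_to_decay(a0_kbq, limit_kbq, half_life_d):
    """Time in days for an activity A(t) = A0 * 2**(-t / T_half)
    to fall below a given clearance limit."""
    if a0_kbq <= limit_kbq:
        return 0.0
    return half_life_d * math.log2(a0_kbq / limit_kbq)
```

Because the waste inventory is dominated by zirconium-88 (up to 1340 kBq per GBq produced in the abstract's figures), its half-life effectively sets the storage duration for the whole batch.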
{"title":"Monte Carlo prediction and experimental characterisation of long-lived waste byproducts arising from cyclotron production of zirconium-89 utilising a commercially available yttrium foil.","authors":"Andrew Chacon, Sylvia Gong, Artur Cichocki, Talia Enright, Harris Panopoulos, Nathan Sonnberger, Andrew M Scott, Graeme O'Keefe","doi":"10.1007/s13246-025-01630-1","DOIUrl":"10.1007/s13246-025-01630-1","url":null,"abstract":"<p><p>Zirconium-89 is presently undergoing pre-clinical investigation for its potential application as a positron emission tomography (PET) theranostic radioisotope. A critical consideration in the increasing number of trials and eventual clinical implementations is a comprehensive understanding of the radioactive waste byproducts and their quantification. This study focuses on the investigation and characterisation of the waste isotopes generated during the production of Zirconium-89, employing a combination of Geant4 Monte Carlo simulation and experimental methodologies utilising commercially obtainable starting materials from Thermofisher. Post cyclotron production samples of waste were taken and measured using a high purity germanium detector. Subsequent spectrum analysis consistently revealed the presence of the following isotopes in units of kBq per GBq of Zirconium-89 produced: cobalt-56 (13 ± 2, 14 ± 2, 15 ± 1), cobalt-57 (0.087 ± 0.004, 0.097 ± 0.004, 0.086 ± 0.007), rhenium-183 (2.61 ± 0.06, 3.29 ± 0.06, 2.47 ± 0.09), scandium-48 (27 ± 0.9, 21.1 ± 0.4), yttrium-88 (0.67 ± 0.06, 1.1 ± 0.4, 0.73 ± 0.06) and zirconium-88 (90 ± 5, 1340 ± 60, 35 ± 2). All the waste isotopes were able to reasonably be estimated using Geant4 Monte Carlo simulations or the deviation was able to be justified. 
The repeatability and predictability of isotopes and activities will enable informed decision-making regarding storage and disposal procedures in accordance with local legislative requirements.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":"1901-1910"},"PeriodicalIF":2.0,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12738615/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144974832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}