Pub Date: 2025-11-03, DOI: 10.1007/s13246-025-01661-8
Armin Ghasimi, Sina Shamekhi
Cognitive workload refers to the mental effort required to perform a task and plays a vital role in cognitive functioning and daily decision-making. Precise estimation of cognitive workload can increase efficiency and reduce mental errors. EEG signals are non-invasive and reliable, contain useful information about mental and cognitive tasks, and are very effective for measuring cognitive workload. This study aims to classify cognitive workload levels using EEG signals, primarily through channel selection based on the Pearson correlation coefficient, to reduce computational complexity and facilitate real-time applications. Because time-frequency decomposition techniques provide simultaneous time and frequency information for more accurate analysis, three techniques were adopted: Maximal Overlap Discrete Wavelet Transform (MODWT), Empirical Mode Decomposition (EMD), and a hybrid approach combining both. After decomposition, ten statistical features were extracted, and the Improved Distance Evaluation technique was employed to select the most informative features. Classification was performed on these features using three classifiers: Support Vector Machine (SVM), K-Nearest Neighbors, and Decision Tree. The findings revealed the important role of frontal EEG channels in assessing cognitive workload. Additionally, the combined use of MODWT and EMD with the SVM classifier yielded the best classification accuracy in both binary and three-class scenarios. The results indicate that an optimal choice of channels, combined with time-frequency decomposition methods, can significantly enhance classification accuracy while reducing system complexity in estimating cognitive workload.
{"title":"Correlation-based channel selection for cognitive workload assessment and classification using EEG signals.","authors":"Armin Ghasimi, Sina Shamekhi","doi":"10.1007/s13246-025-01661-8","DOIUrl":"https://doi.org/10.1007/s13246-025-01661-8","url":null,"abstract":"<p><p>Cognitive workload refers to the mental effort required to perform a task and plays a vital role in cognitive functioning and daily decision-making. The precise estimation of cognitive workload can increase efficiency and decrease mental errors. EEG signals are non-invasive and trustworthy, containing useful information about mental and cognitive tasks, and are very effective in measuring cognitive workload. This study aims to classify various cognitive workload levels using EEG signals, primarily by channel selection based on the Pearson Correlation Coefficient, to reduce computational complexity and facilitate real-time applications. As time-frequency decomposition techniques can provide simultaneous time and frequency information for more accurate analysis, three techniques were adopted: Maximal Overlap Discrete Wavelet Transform (MODWT), Empirical Mode Decomposition (EMD), and a hybrid approach combining both. After decomposition, ten statistical features were extracted, and the Improved Distance Evaluation technique was employed to select the most critical features. Classification was performed on these features using three classifiers: Support Vector Machine (SVM), K-Nearest Neighbors, and Decision Tree. The findings revealed the important role of frontal EEG channels in assessing cognitive workload. Additionally, the combined use of MODWT and EMD with the SVM classifier yielded the best classification accuracy for both binary and three-class classification scenarios. 
The results indicate that the optimal choice of channels, combined with time-frequency decomposition methods, can significantly enhance classification accuracy while reducing system complexity in estimating cognitive workload.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145439688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-03, DOI: 10.1007/s13246-025-01660-9
Syed Fawad Hussain, Saeed Mian Qaisar, Muhammad Sherjeel
Telehealthcare is an evolving area that typically employs cloud-connected wireless biomedical devices for the diagnosis, monitoring, and prognosis of diseases. In such environments, data compression, transmission, security, and processing effectiveness are key issues. This paper proposes a new method for the automated diagnosis of arrhythmia in an efficient and effective manner. The proposed technique fuses Level-Crossing Analog-to-Digital Converters (LCADCs), an Enhanced Activity Selection Algorithm (EASA), Adaptive-Rate Filtering (ARF), and a 1-D CNN. The electrocardiogram (ECG) signal is sampled using the level-crossing concept. QRS-based segmentation and ARF with lower-tap filters are realized. The denoised segments, without any handcrafted feature extraction, are classified with a one-dimensional (1-D) deep convolutional neural network (CNN). Comparisons are performed with statistically extracted features combined with a CNN, with existing state-of-the-art classical methods for ECG classification, and with recent advanced deep learning models. The goal is an efficient method attaining real-time data size reduction, computationally efficient signal preconditioning, and low-latency, accurate classification. Five clinically important classes of arrhythmia, collected from the MIT-BIH dataset, are used to examine its applicability. Our experimental results show a 4.2-fold reduction, on average, in the number of acquired samples compared to conventional fixed-rate counterparts. Similarly, the data dimension reduction yields a more than 7.2-fold improvement in the computational efficiency of the post-denoising stage over conventional counterparts. Moreover, classification latency is also significantly reduced while still achieving an accuracy of 99%.
{"title":"Level-crossing processing and deep convolutional neural network for arrhythmia classification in telehealth services.","authors":"Syed Fawad Hussain, Saeed Mian Qaisar, Muhammad Sherjeel","doi":"10.1007/s13246-025-01660-9","DOIUrl":"https://doi.org/10.1007/s13246-025-01660-9","url":null,"abstract":"<p><p>Telehealthcare is an evolving area that typically employs cloud-connected wireless biomedical gadgets for diagnosis, monitoring, and prognosis of diseases. In such environment, data compression, transmission, security and processing effectiveness are key issues. This paper proposes a new method for the automated diagnosis of arrhythmia in an efficient and effective manner. The proposed technique fuses a combination of Level-Crossing Analog-Digital Converters (LCADCs), Enhanced Activity Selection Algorithm (EASA), Adaptive-Rate Filtering (ARF), and ID-CNN. The electrocardiogram (ECG) signal is sampled by using the level-crossing concept. The QRS based segmentation and ARF with lower tap filters are realized. The denoised segments, without any handcrafted features extraction, are classified with one dimensional (1-D) deep convolutional neural network (CNN). Comparison is performed with using statistically extracted features in combination with CNN, existing state-of-the-art classical methods for ECG classification, and recent advanced deep learning models. The goal is to reach an efficient method by attaining a real-time data size reduction, computationally efficient signal preconditioning and a lower latency accurate classification. Five clinically important classes of arrhythmias, collected from the MIT-BIH dataset, are used to examine its applicability. Our experimental results show a 4.2-times diminishing in the count of acquired samples, on average, compared to conventional fix-rate counterparts. 
Similarly, data dimension reduction results in a more than 7.2-times computational effectiveness of the post denoising stage over the conventional counterparts. Moreover, classification latency is also significantly reduced while still achieving an accuracy rate of 99%.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145439669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: The increased use of CT has raised concerns about patient radiation exposure. Diagnostic reference levels (DRLs) play a crucial role in optimising radiation dose while maintaining diagnostic quality. In Jordan, the absence of officially established national DRLs across a wide range of CT procedures may contribute to dose variability between healthcare facilities.
Methods: A multicentre, retrospective study was conducted across 10 hospitals in Jordan, involving 4310 adult patients (aged 18-96 years). Radiation dose metrics, including the volume CT dose index (CTDIvol) and dose-length product (DLP), were collected from PACS and RIS. The proposed national DRLs were derived as the 75th percentile of the distribution of median CTDIvol and DLP values from each hospital. Stepwise multiple regression analysis was performed to identify factors contributing to dose variability.
Results: Marked dose variations were observed across hospitals. Routine non-contrast head CT demonstrated the highest median CTDIvol (65 mGy) and DLP (1572 mGy·cm), while high-resolution chest CT exhibited the lowest (CTDIvol: 12 mGy; DLP: 230 mGy·cm). The tube current-time product (mAs) was identified as the most significant predictor of dose across all CT examinations. Compared with international DRLs, Jordan's CT dose levels were generally within acceptable ranges, although L-spine CT showed higher-than-average values.
Conclusion: This study proposes the first national DRLs for 14 common CT examinations in Jordan, based on data collected from hospitals across the country. These benchmarks support dose optimisation, promote standardised protocols, and highlight the need for continuous radiographer training. Future initiatives should expand DRL development to paediatric populations and integrate dose tracking into national quality frameworks.
{"title":"Proposing computed tomography diagnostic reference levels in Jordan: a national multicentre analysis.","authors":"Abdel-Baset Bani Yaseen, Jamie Trapp, Davide Fontanarosa","doi":"10.1007/s13246-025-01667-2","DOIUrl":"https://doi.org/10.1007/s13246-025-01667-2","url":null,"abstract":"<p><strong>Background: </strong>The increased use of CT has raised concerns about patient radiation exposure. DRLs play a crucial role in optimising radiation dose while maintaining diagnostic quality. In Jordan, the absence of officially established national DRLs across a wide range of CT procedures may contributes to dose variability between healthcare facilities.</p><p><strong>Methods: </strong>A multicentre, retrospective study was conducted across 10 hospitals in Jordan, involving 4310 adult patients (aged 18-96 years). Radiation dose metrics, including volume CTDI<sub>vol</sub> and DLP, were collected from PACS and RIS. The proposed national DRLs were derived from the 75th percentile of the distribution of median CTDI<sub>vol</sub> and DLP values from each hospital. Stepwise multiple regression analysis was performed to identify factors contributing to dose variability.</p><p><strong>Results: </strong>Marked dose variations were observed across hospitals. Head routine non-contrast CT demonstrated the highest median CTDI<sub>vol</sub> (65 mGy) and DLP (1572 mGy·cm), while high-resolution chest CT exhibited the lowest (CTDI<sub>vol</sub>: 12 mGy; DLP: 230 mGy·cm). The product of mAs was identified as the most significant predictor of dose across all CT examinations. When compared to international DRLs, Jordan's CT dose levels were generally within acceptable ranges, though L-spine CT showed higher than average values.</p><p><strong>Conclusion: </strong>This study proposes the first national DRLs for 14 common CT examinations in Jordan, based on data collected from hospitals across the country. 
These benchmarks support dose optimisation, promote standardised protocols, and highlight the need for continuous radiographer training. Future initiatives should expand DRL development to paediatric populations and integrate dose tracking into national quality frameworks.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145402266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-27, DOI: 10.1007/s13246-025-01662-7
Arshia Eskandari, Sara Malek, Taha Samiazar, Aisa Rassoli, Mahkame Sharbatdar
An aneurysm, an abnormal enlargement of an artery or vein, weakens the surrounding vascular wall, making it susceptible to rupture and potentially fatal bleeding. The placement of flow-diverting stents is a widely used and effective method for treating aneurysms. This study presents a novel approach combining CFD simulations, deep neural networks (DNNs), and differential evolution optimization (DEO) to optimize hemodynamic conditions in aneurysms. Initially, CFD simulations were conducted to generate a comprehensive dataset of 2,700 simulations with various stent configurations. This dataset was then used to train a DNN model, enabling accurate predictions of velocity, vorticity, and wall shear stress for any stent configuration. The model demonstrated consistent and reliable performance across different configurations. DEO was applied to identify the optimal stent, resulting in a configuration with seven struts. The optimal strut sizes were 0.3184, 0.9599, 0.7889, 0.9599, 1.0073, 1.0073, and 2.9283, with gap sizes of 0.2238, 0.5897, 0.3379, 0.2996, 0.2052, 0.0371, and 0.3068 between the struts. This configuration achieved superior performance in reducing velocity, vorticity, and maximum wall shear stress. The study demonstrated that increasing the number of struts, with a concentration at the proximal aneurysm neck, enhanced flow diversion and minimized hemodynamic risks, especially in regions vulnerable to rupture. Validation through additional CFD simulations confirmed the effectiveness of the optimized stent, demonstrating the potential of the proposed methodology to improve stent design and hemodynamic outcomes in aneurysm treatment.
{"title":"Optimizing flow-diverting stent configurations for aneurysm treatment: a computational approach integrating deep learning and differential evolution optimization.","authors":"Arshia Eskandari, Sara Malek, Taha Samiazar, Aisa Rassoli, Mahkame Sharbatdar","doi":"10.1007/s13246-025-01662-7","DOIUrl":"https://doi.org/10.1007/s13246-025-01662-7","url":null,"abstract":"<p><p>An aneurysm, enlargement of an artery or vein, weakens the surrounding vascular wall, making it susceptible to rupture and the possibility of life-threatening bleeding, ultimately resulting in death. The placement of flow-diverting stents is a highly utilized and effective method for treating aneurysms. This study presents a novel approach combining CFD simulations, deep neural networks (DNN), and differential evolution optimization (DEO) to optimize hemodynamic conditions in aneurysms. Initially, CFD simulations were conducted to generate a comprehensive dataset of 2,700 simulations with various stent configurations. This dataset was then used to train a DNN model, enabling accurate predictions of velocity, vorticity, and wall shear stress for any stent configuration. The model demonstrated consistent and reliable performance across different configurations. DEO was applied to identify the optimal stent, resulting in a configuration with seven struts. The optimal strut sizes were 0.3184, 0.9599, 0.7889, 0.9599, 1.0073, 1.0073, and 2.9283, with gap sizes of 0.2238, 0.5897, 0.3379, 0.2996, 0.2052, 0.0371, and 0.3068 between the struts. This configuration achieved superior performance in reducing velocity, vorticity, and maximum wall shear stress. The study demonstrated that increasing the number of struts, with a concentration at the proximal aneurysm neck, enhanced flow diversion and minimized hemodynamic risks, especially in regions vulnerable to rupture. 
Validation through additional CFD simulations confirmed the effectiveness of the optimized stent, demonstrating the potential of the proposed methodology to improve stent design and hemodynamic outcomes in aneurysm treatment.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145379369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-27, DOI: 10.1007/s13246-025-01663-6
Guanfu Li, Chunyou Ye, Weiwei Chen, Peiyao Hao, Fang He, Jijun Han
Glioma is primarily treated through surgical resection, but accurately identifying tumor boundaries remains challenging. Traditional intraoperative diagnostic techniques, such as frozen-section pathological examination and intraoperative magnetic resonance imaging, suffer from long duration, high cost, and complex operation. This study proposes a rapid and accurate intraoperative auxiliary diagnostic method for glioma based on differences in dielectric properties combined with machine learning. Using an open-ended coaxial probe technique, the dielectric properties of 81 glioma tissue samples and 47 normal brain tissue samples from 14 patients were measured over a frequency range of 1 MHz to 4 GHz. After feature selection and dimensionality reduction using the Lasso method, four machine learning models (Naive Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Network (ANN)) were used to classify the samples. Model performance was evaluated using accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve (AUC). The experimental results demonstrated that the dielectric properties of glioma tissues are higher than those of normal brain tissues (an average increase of 22% in conductivity and 18% in relative permittivity). On the test set, the KNN model exhibited the highest classification accuracy (90%), while the ANN model showed the best AUC (0.95). This study confirms that rapid identification of glioma can be achieved using dielectric properties combined with machine learning, providing neurosurgeons with a novel auxiliary diagnostic technology for precise intraoperative margin detection of glioma.
{"title":"Measurement and classification of dielectric properties in human brain tissues: differentiating glioma from normal tissues using machine learning.","authors":"Guanfu Li, Chunyou Ye, Weiwei Chen, Peiyao Hao, Fang He, Jijun Han","doi":"10.1007/s13246-025-01663-6","DOIUrl":"https://doi.org/10.1007/s13246-025-01663-6","url":null,"abstract":"<p><p>Glioma is primarily treated through surgical resection, but accurately identifying tumor boundaries remains challenging. Traditional intraoperative diagnostic techniques, such as frozen section pathological examination and intraoperative magnetic resonance imaging, suffer from issues such as long duration, high cost, and complex operation. A rapid and accurate intraoperative auxiliary diagnostic method for glioma based on the differences in dielectric properties combined with machine learning is proposed in this study. Using an open-ended coaxial probe technique, the dielectric properties of 81 glioma tissue samples and 47 normal brain tissue samples from 14 patients were measured over a frequency range of 1 MHz-4 GHz. After feature selection and dimensionality reduction using the Lasso method, four machine learning models-Naive Bayes (NB), Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and Artificial Neural Network (ANN)-were used to classify the samples. Model performance was evaluated using accuracy, precision, recall, F1 score, and the area under the Receiver Operating Characteristic curve (AUC value). The experimental results demonstrated that the dielectric properties of glioma tissues are higher than those of normal brain tissues (with an average increase of 22% in conductivity and 18% in relative permittivity). On the test set, the KNN model exhibited the highest classification accuracy (90%), while the ANN model showed the best AUC value (0.95). 
This study confirms that the rapid identification of glioma can be achieved based on dielectric properties combined with machine learning techniques, providing neurosurgeons with a novel auxiliary diagnostic technology for precise intraoperative margin detection of glioma.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145379284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-27, DOI: 10.1007/s13246-025-01665-4
Sen Yang, Youchi Zhang, Yingdu Liu, Haonan Li, Pengshuo Gan, Samuel Mungai, Pengwei Shu, Zhonghua Kuang, Ning Ren, Yongfeng Yang, Zheng Liu
A prototype Compton camera composed of two high-resolution scintillator detectors is presented in this work. The scatterer detector consists of a 21 × 21 gadolinium aluminum gallium garnet (GAGG) crystal array with a crystal size of 0.6 × 0.6 × 2 mm³. The absorber detector consists of a 23 × 23 lutetium yttrium orthosilicate (LYSO) crystal array with a crystal size of 1.0 × 1.0 × 20 mm³. A simple back-projection image reconstruction method was developed. The energy response of the scatterer detector was accurately calibrated using the 55, 202, and 307 keV gamma-rays from the LYSO natural background and the 511 keV gamma-rays from a 22Na point source. The scatterer detector resolves all crystals clearly, even with an energy window of 30-120 keV, and achieves an average crystal energy resolution of 10.4% at 511 keV. The absorber detector also resolves all crystals clearly, with an average crystal depth-of-interaction resolution of ~2 mm and an average crystal energy resolution of 19.4% at 511 keV. Using the 511 keV gamma-rays from a 22Na point source, an average spatial resolution of 2.5 mm was obtained, and nine point sources spaced 3 mm apart were well resolved at an image plane 7.5 mm from the front of the scatterer detector. Furthermore, iterative reconstruction using the maximum-likelihood expectation maximization (MLEM) algorithm achieved a spatial resolution of ~1 mm at a plane 7.5 mm from the front of the scatterer detector. Compared with the simple back-projection method, MLEM reconstruction significantly enhanced image contrast and effectively suppressed background artifacts.
{"title":"Development of a prototype Compton camera consisting of high-resolution scintillator detectors.","authors":"Sen Yang, Youchi Zhang, Yingdu Liu, Haonan Li, Pengshuo Gan, Samuel Mungai, Pengwei Shu, Zhonghua Kuang, Ning Ren, Yongfeng Yang, Zheng Liu","doi":"10.1007/s13246-025-01665-4","DOIUrl":"https://doi.org/10.1007/s13246-025-01665-4","url":null,"abstract":"<p><p>A prototype Compton camera composed of two high resolution scintillator detectors is presented in this work. The scatterer detector consists of a 21 × 21 gadolinium aluminum gallium garnet (GAGG) crystal array with a crystal size of 0.6 × 0.6 × 2 mm<sup>3</sup>. The absorber detector consists of a 23 × 23 lutetium yttrium orthosilicate (LYSO) crystal array with a crystal size of 1.0 × 1.0 × 20 mm<sup>3</sup>. A simple back-projection image reconstruction method was developed. The energy of the scatterer detector was accurately calibrated using the 55, 202, 307 keV gamma-rays from the LYSO natural background and the 511 keV gamma-ray from a <sup>22</sup>Na point source. The scatterer detector provides a performance with all crystals clearly resolved even at an energy window of 30-120 keV and an average crystal energy resolution of 10.4% at 511 keV. The absorber detector provides a performance with all crystals clearly resolved, an average crystal depth of interaction resolution of ~ 2 mm and an average crystal energy resolution of 19.4% at 511 keV. An average spatial resolution of 2.5 mm was obtained and 9 point sources of 3 mm apart were well resolved at an image plane 7.5 mm from the front of the scatterer detector by using the 511 keV gamma-rays from a <sup>22</sup>Na point sources. Furthermore, iterative reconstruction using the maximum-likelihood expectation maximization (MLEM) algorithm achieved a spatial resolution of ~ 1 mm at a plane 7.5 mm from the front of the scatterer detector. 
Compared with the simple back-projection method, the MLEM reconstruction significantly enhanced the image contrast and effectively suppressed the background artifacts.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145379300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Separate renal function assessment is important in clinical decision making. Single-photon emission computed tomography (SPECT) is commonly used for this assessment, although it involves radiation exposure and is tedious and costly. This study aimed to automatically assess separate renal function using plain CT images and artificial intelligence methods, including deep learning-based automatic segmentation and radiomics modeling. We performed a retrospective study on 281 patients with nephrarctia or hydronephrosis from two centers (training set: 159 patients from Center I; test set: 122 patients from Center II). The renal parenchyma and hydronephrosis regions in plain CT images were automatically segmented using deep learning-based U-Net transformers (UNETR). Radiomic features were extracted from the two regions and used to build a radiomic signature with ElasticNet, which was then combined with clinical characteristics using multivariable logistic regression to obtain an integrated model. The automatic segmentation was evaluated using the Dice similarity coefficient (DSC). The mean DSC of automatic kidney segmentation based on UNETR was 0.894 in the training set and 0.881 in the test set. The average time for automatic and manual segmentation was 3.4 s/case and 1477.9 s/case, respectively. The AUC of the radiomic signature was 0.778 in the training set and 0.801 in the test set. The AUC of the integrated model was 0.792 and 0.825 in the training and test sets, respectively. It is feasible to assess the function of each kidney separately using plain CT and AI methods. Our method can minimize radiation risk, improve diagnostic efficiency, and reduce costs.
{"title":"Artificial intelligence-based method for renal function automatic assessment of each kidney using plain computed tomography (CT) scans.","authors":"Rongchang Guo, Wei Xia, Feng Xu, Yaotian Qian, Qiuyue Han, Daoying Geng, Xin Gao, Yiwei Wang","doi":"10.1007/s13246-025-01651-w","DOIUrl":"https://doi.org/10.1007/s13246-025-01651-w","url":null,"abstract":"<p><p>Separate renal function assessment is important in clinical decision making. The single-photon emission computed tomography is commonly used for the assessment although radioactive, tedious and of high cost. This study aimed to automatically assess the separate renal function using plain CT images and artificial intelligence methods, including deep learning-based automatic segmentation and radiomics modeling. We performed a retrospective study on 281 patients with nephrarctia or hydronephrosis from two centers (Training set: 159 patients from Center I; Test set: 122 patients from Center II). The renal parenchyma and hydronephrosis regions in plain CT images were automatically segmented using deep learning-based U-Net transformers (UNETR). Radiomic features were extracted from the two regions and used to build radiomic signature using the ElasticNet, then further combined with clinical characteristics using multivariable logistic regression to obtain an integrated model. The automatic segmentation was evaluated using the dice similarity coefficient (DSC). The mean DSC of automatic kidney segmentation based on UNETR was 0.894 and 0.881 in the training and test sets. The average time of automatic and manual segmentation was 3.4 s/case and 1477.9 s/case. The AUC of radiomic signature was 0.778 in the training set and 0.801 in the test set. The AUC of the integrated model was 0.792 and 0.825 in the training and test sets. It is feasible to assess the renal function of each kidney separately using plain CT and AI methods. 
Our method can minimize the radiation risk, improve the diagnostic efficiency and reduce the costs.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-10-09, DOI: 10.1007/s13246-025-01655-6
Giuseppe Prisco, Mario Cesarelli, Fabrizio Esposito, Antonella Santone, Paolo Gargiulo, Francesco Amato, Leandro Donisi
Work-related musculoskeletal disorders represent a significant occupational health issue. These disorders encompass a range of conditions resulting from specific risk factors associated with manual material handling, such as intensity, repetition, and duration. Over the years, several observational methodologies have been developed to assess biomechanical risk, but their main limitation is reliance on clinicians' subjective assessment. For this reason, wearable sensors coupled with artificial intelligence have recently been introduced in the occupational ergonomics field. This study aimed to develop a new technological methodology, based on machine learning algorithms and inertial wearable sensors, able to automatically discriminate the biomechanical risk associated with lifting loads. Ten healthy volunteers were enrolled in this study, performing specific weight-lifting tasks while wearing two inertial measurement units on the sternum and lumbar region. The acquired inertial signals were processed to extract several time-domain and frequency-domain features, which were used as input to several machine learning algorithms. Excellent results in discriminating biomechanical risk classes were obtained, reaching accuracies and areas under the receiver operating characteristic curve above 86% and 95%, respectively. In addition, the sternum emerged as the most informative body landmark, while the mean absolute value was identified as the most informative feature. Future investigations on a larger study population could confirm the potential of the proposed automatic procedure for use in the workplace in combination with well-established methodologies.
{"title":"An automatic approach to assess biomechanical risk using machine learning algorithms and inertial sensors.","authors":"Giuseppe Prisco, Mario Cesarelli, Fabrizio Esposito, Antonella Santone, Paolo Gargiulo, Francesco Amato, Leandro Donisi","doi":"10.1007/s13246-025-01655-6","DOIUrl":"https://doi.org/10.1007/s13246-025-01655-6","url":null,"abstract":"<p><p>Work-related musculoskeletal disorders represent a significant occupational health issue. These disorders encompass a range of conditions resulting from specific risk factors associated with manual material handling, such as intensity, repetition, and duration. Over the years, several observational methodologies have been developed to assess biomechanical risk, but their main limitation is reliance on clinicians' subjective assessment. For this reason, wearable sensors coupled with artificial intelligence have recently been integrated into the occupational ergonomics field. This study aimed to develop a new technological methodology, based on machine learning algorithms and inertial wearable sensors, able to automatically discriminate the biomechanical risk associated with lifting loads. Ten healthy volunteers were enrolled in this study, performing specific weight-lifting tasks while wearing two inertial measurement units on the sternum and lumbar region. The acquired inertial signals were appropriately processed to extract several time-domain and frequency-domain features, which were used as inputs to several machine learning algorithms. Excellent results in discriminating biomechanical risk classes were obtained, reaching accuracies and areas under the receiver operating characteristic curve above 86% and 95%, respectively. In addition, the sternum emerged as the most informative body landmark, while the mean absolute value was identified as the most informative feature.
Future investigations on a larger study population could confirm the potential of the proposed automatic procedure to be used in the workplace in combination with well-established methodologies.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145253278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
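The mean absolute value (MAV), which the study above identified as the most informative time-domain feature, is simple to compute over a windowed inertial signal. The sketch below uses made-up sample values and is only an illustration of the feature itself, not the authors' processing pipeline:

```python
import numpy as np

def mean_absolute_value(signal):
    # Mean absolute value (MAV): average magnitude of the samples in a window.
    return float(np.mean(np.abs(signal)))

# Hypothetical 5-sample accelerometer window (arbitrary units)
window = np.array([0.2, -0.5, 0.1, 0.8, -0.3])
mav = mean_absolute_value(window)  # (0.2 + 0.5 + 0.1 + 0.8 + 0.3) / 5 = 0.38
```

In practice such features would be extracted per window and per sensor axis, then concatenated into the feature vector fed to the classifiers.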
Accurate differentiation between non-cancerous, benign, and malignant lung cancer remains a diagnostic challenge due to overlapping clinical and imaging characteristics. This study proposes a multimodal machine learning (ML) framework integrating positron emission tomography/computed tomography (PET/CT) anatomic-metabolic parameters, sarcopenia markers, and inflammatory biomarkers to enhance classification performance in lung cancer. A retrospective dataset of 222 patients was analyzed, including demographic variables, functional and morphometric sarcopenia indices, hematological inflammation markers, and PET/CT-derived parameters such as maximum and mean standardized uptake values (SUVmax, SUVmean), metabolic tumor volume (MTV), and total lesion glycolysis (TLG). Five ML algorithms (Logistic Regression, Multi-Layer Perceptron, Support Vector Machine, Extreme Gradient Boosting, and Random Forest) were evaluated using standardized performance metrics. The Synthetic Minority Oversampling Technique was applied to balance class distributions. Feature importance analysis was conducted using the optimal model, and classification was repeated using the top 15 features. Among the models, Random Forest demonstrated superior predictive performance with a test accuracy of 96%, precision, recall, and F1-score of 0.96, and an average AUC of 0.99. Feature importance analysis revealed SUVmax, SUVmean, total lesion glycolysis, and skeletal muscle index as the leading predictors. A secondary classification using only the top 15 features yielded even higher test accuracy (97%). These findings underscore the potential of integrating metabolic imaging, physical function, and biochemical inflammation markers in a non-invasive ML-based diagnostic pipeline. The proposed framework demonstrates high accuracy and generalizability and may serve as an effective clinical decision support tool in early lung cancer diagnosis and risk stratification.
{"title":"Machine learning-assisted classification of lung cancer: the role of sarcopenia, inflammatory biomarkers, and PET/CT anatomical-metabolic parameters.","authors":"Handan Tanyildizi-Kokkulunk, Goksel Alcin, Iffet Cavdar, Resit Akyel, Safak Yigit, Tuba Ciftci-Kusbeci, Gonul Caliskan","doi":"10.1007/s13246-025-01650-x","DOIUrl":"https://doi.org/10.1007/s13246-025-01650-x","url":null,"abstract":"<p><p>Accurate differentiation between non-cancerous, benign, and malignant lung cancer remains a diagnostic challenge due to overlapping clinical and imaging characteristics. This study proposes a multimodal machine learning (ML) framework integrating positron emission tomography/computed tomography (PET/CT) anatomic-metabolic parameters, sarcopenia markers, and inflammatory biomarkers to enhance classification performance in lung cancer. A retrospective dataset of 222 patients was analyzed, including demographic variables, functional and morphometric sarcopenia indices, hematological inflammation markers, and PET/CT-derived parameters such as maximum and mean standardized uptake values (SUVmax, SUVmean), metabolic tumor volume (MTV), and total lesion glycolysis (TLG). Five ML algorithms (Logistic Regression, Multi-Layer Perceptron, Support Vector Machine, Extreme Gradient Boosting, and Random Forest) were evaluated using standardized performance metrics. The Synthetic Minority Oversampling Technique was applied to balance class distributions. Feature importance analysis was conducted using the optimal model, and classification was repeated using the top 15 features. Among the models, Random Forest demonstrated superior predictive performance with a test accuracy of 96%, precision, recall, and F1-score of 0.96, and an average AUC of 0.99. Feature importance analysis revealed SUVmax, SUVmean, total lesion glycolysis, and skeletal muscle index as the leading predictors. A secondary classification using only the top 15 features yielded even higher test accuracy (97%).
These findings underscore the potential of integrating metabolic imaging, physical function, and biochemical inflammation markers in a non-invasive ML-based diagnostic pipeline. The proposed framework demonstrates high accuracy and generalizability and may serve as an effective clinical decision support tool in early lung cancer diagnosis and risk stratification.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145233973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
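The top-15 refit described in the abstract above can be sketched with scikit-learn's Random Forest importance ranking. This is a minimal illustration on synthetic data (the real 222-patient dataset, its feature names such as SUVmax, and the SMOTE oversampling step are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in: 300 cases, 40 anonymous features, 3 classes
# (non-cancerous / benign / malignant).
X = rng.normal(size=(300, 40))
y = rng.integers(0, 3, size=300)
X[:, 0] += y  # make feature 0 informative so the ranking has something to find

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank features by impurity-based importance and refit on the top 15,
# mirroring the paper's secondary classification.
top15 = np.argsort(rf.feature_importances_)[::-1][:15]
rf_top = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr[:, top15], y_tr)
acc = rf_top.score(X_te[:, top15], y_te)
```

On real data, class balancing (e.g. SMOTE from the imbalanced-learn package) would be applied to the training split only, before fitting.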
Dual-energy computed tomography (DECT) generates virtual monochromatic images (VMI) and material decomposition images (MDI), facilitating enhanced tissue contrast and quantitative material assessment. However, the accuracy of these measurements may be influenced by object size due to beam hardening and associated spectral changes. This study aimed to evaluate the impact of object size on the accuracy of iodine quantification and CT numbers in VMI using split-filter dual-energy CT (SFDE), and to compare its performance with sequential acquisition dual-energy CT (SADE). CT scans were performed on phantoms with diameters ranging from 16 to 36 cm using both SFDE and SADE techniques. Virtual monochromatic images and material decomposition images were generated. CT numbers and iodine concentrations were measured from embedded iodine rods, and relative errors were calculated using the 16 cm phantom as a reference. CT numbers in VMI obtained from SFDE exhibited increasing variability with larger phantom sizes, particularly at both low and high energy levels. Iodine quantification errors with SFDE exceeded 10% in all phantom sizes and reached approximately 60% in the 36 cm phantom. In contrast, SADE consistently maintained measurement errors within 10%. Object size significantly influences the accuracy of CT numbers and iodine quantification using SFDE, with larger phantoms showing marked overestimation. These results suggest that careful interpretation is necessary when applying SFDE-based quantitative imaging in patients with larger body sizes.
{"title":"Accuracy of iodine quantification and CT numbers using split-filter dual-energy CT: influence of phantom diameter.","authors":"Masato Kiriki, Maiko Kishigami, Toshiyuki Sakai, Takahiro Minamoto","doi":"10.1007/s13246-025-01658-3","DOIUrl":"https://doi.org/10.1007/s13246-025-01658-3","url":null,"abstract":"<p><p>Dual-energy computed tomography (DECT) generates virtual monochromatic images (VMI) and material decomposition images (MDI), facilitating enhanced tissue contrast and quantitative material assessment. However, the accuracy of these measurements may be influenced by object size due to beam hardening and associated spectral changes. This study aimed to evaluate the impact of object size on the accuracy of iodine quantification and CT numbers in VMI using split-filter dual-energy CT (SFDE), and to compare its performance with sequential acquisition dual-energy CT (SADE). CT scans were performed on phantoms with diameters ranging from 16 to 36 cm using both SFDE and SADE techniques. Virtual monochromatic images and material decomposition images were generated. CT numbers and iodine concentrations were measured from embedded iodine rods, and relative errors were calculated using the 16 cm phantom as a reference. CT numbers in VMI obtained from SFDE exhibited increasing variability with larger phantom sizes, particularly at both low and high energy levels. Iodine quantification errors with SFDE exceeded 10% in all phantom sizes and reached approximately 60% in the 36 cm phantom. In contrast, SADE consistently maintained measurement errors within 10%. Object size significantly influences the accuracy of CT numbers and iodine quantification using SFDE, with larger phantoms showing marked overestimation.
These results suggest that careful interpretation is necessary when applying SFDE-based quantitative imaging in patients with larger object sizes.</p>","PeriodicalId":48490,"journal":{"name":"Physical and Engineering Sciences in Medicine","volume":" ","pages":""},"PeriodicalIF":2.0,"publicationDate":"2025-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145233926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
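The relative-error metric used in the phantom study above (measured value against the 16 cm reference phantom) is a one-line computation. The concentrations below are hypothetical, chosen only to reproduce the order of magnitude of the reported ~60% SFDE overestimation:

```python
def relative_error_pct(measured, reference):
    # Relative error (%) of a measured value against the reference phantom value.
    return 100.0 * (measured - reference) / reference

# Hypothetical iodine concentrations in mg/mL: 10.0 stands in for the 16 cm
# reference phantom, 16.0 for an SFDE reading in a 36 cm phantom.
err = relative_error_pct(16.0, 10.0)  # 60.0 (% overestimation)
```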