
Computer methods and programs in biomedicine update: Latest articles

Multiscale guided attention network for optic disc segmentation of retinal images
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100180
A Z M Ehtesham Chowdhury , Andrew Mehnert , Graham Mann , William H. Morgan , Ferdous Sohel
Optic disc (OD) segmentation from retinal images is crucial for diagnosing, assessing, and tracking the progression of several sight-threatening diseases. This paper presents a deep machine-learning method for semantically segmenting OD from retinal images. The method is named multiscale guided attention network (MSGANet-OD), comprising encoders for extracting multiscale features and decoders for constructing segmentation maps from the extracted features. The decoder also includes a guided attention module that incorporates features related to structural, contextual, and illumination information to segment OD. A custom loss function is proposed to retain the optic disc's geometrical shape (i.e., elliptical) constraint and to alleviate the blood vessels' influence in the overlapping region between the OD and vessels. MSGANet-OD was trained and tested on an in-house clinical color retinal image dataset captured during ophthalmodynamometry as well as on several publicly available color fundus image datasets, e.g., DRISHTI-GS, RIM-ONE-r3, and REFUGE1. Experimental results show that MSGANet-OD achieved superior OD segmentation performance from ophthalmodynamometry images compared to widely used segmentation methods. Our method also achieved competitive results compared to state-of-the-art OD segmentation methods on public datasets. The proposed method can be used in automated systems to quantitatively assess optic nerve head abnormalities (e.g., glaucoma, optic disc neuropathy) and vascular changes in the OD region.
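The abstract does not spell out the form of the custom shape-constraint loss; as one illustrative possibility (function name and thresholds below are hypothetical, not the authors' code), an elliptical prior can be scored by comparing a binary mask with its moment-equivalent ellipse:

```python
import numpy as np

def ellipse_deviation(mask):
    """Penalty in [0, inf): ~0 when the mask matches its moment-equivalent ellipse.

    For a uniform ellipse, area == 4*pi*sqrt(det(covariance of pixel coords)),
    so the ratio below is ~1 and the penalty ~0; non-elliptical shapes deviate.
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mxx = ((xs - cx) ** 2).mean()
    myy = ((ys - cy) ** 2).mean()
    mxy = ((xs - cx) * (ys - cy)).mean()
    det = max(mxx * myy - mxy ** 2, 1e-12)
    equivalent_ellipse_area = 4.0 * np.pi * np.sqrt(det)
    return abs(1.0 - len(xs) / equivalent_ellipse_area)

# Demo: a filled disc is nearly elliptical; a thin ring is not.
yy, xx = np.mgrid[-50:51, -50:51]
r = np.sqrt(xx ** 2 + yy ** 2)
disc = r <= 40
ring = (r <= 40) & (r >= 32)
```

In a training loop, such a term would typically be added to a Dice or cross-entropy loss with a weighting hyperparameter.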
Citations: 0
Predictive analysis of clinical features for HPV status in oropharynx squamous cell carcinoma: A machine learning approach with explainability
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2024.100170
Emily Diaz Badilla , Ignasi Cos , Claudio Sampieri , Berta Alegre , Isabel Vilaseca , Simone Balocco , Petia Radeva

Background and Objective:

Oropharynx Squamous Cell Carcinoma (OPSCC) linked to Human Papillomavirus (HPV) exhibits a more favorable prognosis than other squamous cell carcinomas of the upper aerodigestive tract. Finding reliable non-invasive detection methods for this prognostic entity is key to proposing appropriate therapeutic decisions. This study aims to provide a comprehensive method based on pre-treatment clinical data for predicting a patient's HPV status over a large OPSCC cohort, employing explainability techniques to interpret the significance and effects of the features.

Materials and Methods:

We employed the RADCURE dataset clinical information to train six Machine Learning algorithms, evaluating them via cross-validation for grid search hyper-parameter tuning and feature selection as well as a final performance measurement on a 20% sample test set. For explainability, SHAP and LIME were used to identify the most relevant relationships and their effect on the predictive model. Furthermore, additional publicly available datasets were scrutinized to compare outcomes and assess the method’s generalization across diverse feature sets and populations.
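The cross-validation underpinning such a grid search can be sketched without any dependencies; the helper below is illustrative, not the authors' code:

```python
def kfold_splits(n, k):
    """Partition indices 0..n-1 into k contiguous folds and return
    (train, test) index lists, as used for cross-validated grid search."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return [
        ([j for f in folds[:i] + folds[i + 1:] for j in f], folds[i])
        for i in range(k)
    ]

splits = kfold_splits(10, 3)
```

In practice, scikit-learn's `model_selection.GridSearchCV` wraps this splitting, the hyper-parameter grid, and refitting in one call.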

Results:

The best model yielded an AUC of 0.85, a sensitivity of 0.83, and a specificity of 0.75 on the test set. The explainability analysis highlighted the remarkable significance of specific clinical attributes, in particular the oropharynx subsite tumor location and the patient's smoking history. The contribution of each variable to the prediction was substantiated by constructing 95% confidence intervals for the model coefficients via a 10,000-sample bootstrap and by analyzing top contributors across the best-performing models.
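A percentile bootstrap of the kind described (10,000 resamples, 95% interval) can be sketched in pure Python; here it is applied to a simple mean rather than to fitted model coefficients:

```python
import random

def bootstrap_ci(data, stat, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for stat(data):
    resample with replacement, recompute the statistic, take quantiles."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

mean = lambda xs: sum(xs) / len(xs)
data = [0.2, 0.5, 0.4, 0.9, 0.7, 0.3, 0.6, 0.8, 0.1, 0.5]
lo, hi = bootstrap_ci(data, mean)
```

For model coefficients, `stat` would refit the model on each resample and return the coefficient of interest.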

Conclusions:

The combination of specific clinical factors typically collected for OPSCC patients, such as smoking habits and the tumor's oropharynx sub-location, along with the ML models presented here, can by itself provide an informed assessment of HPV status, together with a proper use of data science techniques to explain it. Future work should focus on adding other data modalities, such as CT scans, to enhance performance and uncover new relations, thus aiding medical practitioners in diagnosing OPSCC more accurately.
Citations: 0
GLAAM and GLAAI: Pioneering attention models for robust automated cataract detection
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100182
Deepak Kumar , Chaman Verma , Zoltán Illés

Background and Objective:

Early detection of eye diseases, especially cataracts, is essential for preventing vision impairment. Accurate and cost-effective cataract diagnosis often requires advanced methods. This study proposes novel deep learning models that integrate global and local attention mechanisms into MobileNet and InceptionV3 architectures to improve cataract detection from fundus images.

Methods:

Two deep learning models, Global–Local Attention Augmented MobileNet (GLAAM) and Global–Local Attention Augmented InceptionV3 (GLAAI), were developed to enhance the analysis of fundus images. The models incorporate a combined attention mechanism to effectively capture deteriorated regions in retinal images. Data augmentation techniques were employed to prevent overfitting during training and testing on two cataract datasets. Additionally, Grad-CAM visualizations were used to increase interpretability by highlighting key regions influencing predictions.

Results:

The GLAAM model achieved a balanced accuracy of 97.08%, an average precision of 97.11%, and an F1-score of 97.12% on the retinal dataset. Grad-CAM visualizations confirmed the models' ability to identify crucial cataract-related regions in fundus images.
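Balanced accuracy, the headline metric above, is the unweighted mean of per-class recalls; a minimal sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: robust to class imbalance, unlike plain accuracy."""
    recalls = []
    for c in sorted(set(y_true)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        support = sum(1 for t in y_true if t == c)
        recalls.append(tp / support)
    return sum(recalls) / len(recalls)

# Imbalanced toy example: 6 negatives, 2 positives.
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 1, 1, 0]
```

On this toy example, plain accuracy is 0.75 while balanced accuracy is 2/3, showing why the latter is reported for imbalanced screening data.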

Conclusion:

This study demonstrates a significant advancement in cataract diagnosis using deep learning, with GLAAM and GLAAI models exhibiting strong diagnostic performance. These models have the potential to enhance diagnostic tools and improve patient care by offering a cost-effective and accurate solution for cataract detection, suitable for integration into clinical settings.
Citations: 0
Resectograms: Planning liver surgery with real-time occlusion-free visualization of virtual resections
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100186
Ruoyan Meng , Davit Aghayan , Egidijus Pelanis , Bjørn Edwin , Faouzi Alaya Cheikh , Rafael Palomar

Background and Objective:

Visualization of virtual resections plays a central role in computer-assisted liver surgery planning. However, intricate liver anatomy often results in occlusions and visual clutter, which can lead to inaccuracies in virtual resections. To overcome these challenges, we introduce Resectograms, planar (2D) representations of virtual resections that enable the visualization of information associated with the surgical plan.

Methods:

Resectograms are computed in real-time and displayed as additional 2D views showing anatomical, functional, and risk-associated information extracted from the 3D virtual resection as it is modified during planning, offering surgeons an occlusion-free view of the virtual resection during surgery planning. To further improve functionality, we explored three flattening methods for generating these 2D views: fixed-shape, Least Squares Conformal Maps, and As-Rigid-As-Possible. Additionally, we optimized GPU memory usage by downsampling texture objects, ensuring errors remain within acceptable limits as defined by surgeons.
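As a hedged illustration of the texture-downsampling trade-off (the paper's actual error criterion is defined by surgeons and not reproduced here; the helper name is hypothetical), a round-trip test makes the idea concrete:

```python
import numpy as np

def downsample_error(texture, factor):
    """Max absolute error of a nearest-neighbour downsample/upsample round trip,
    a simple proxy for how much information a texture loses at reduced resolution."""
    small = texture[::factor, ::factor]
    # Upsample back by repeating each texel factor x factor.
    restored = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    restored = restored[: texture.shape[0], : texture.shape[1]]
    return float(np.abs(texture - restored).max())

# A smooth gradient texture loses little; high-frequency noise loses much more.
grad = np.linspace(0.0, 1.0, 64)[None, :] * np.ones((64, 1))
rng = np.random.default_rng(0)
noise = rng.random((64, 64))
```

Smooth segmentations (liver, tumor) behave like the gradient case, while thin vessel textures behave more like the noisy case, matching the authors' observation that downsampling works less well for vessels.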

Results:

We evaluated Resectograms with experienced surgeons (n = 4, 9-15 years of experience) and assessed the 2D flattening methods with computer and biomedical scientists (n = 11) through visual experiments. Surgeons found Resectograms valuable for enhancing surgical planning effectiveness and accuracy.

Conclusions:

This paper presents Resectograms, a novel method for visualizing liver virtual resection plans in 2D, offering an intuitive, occlusion-free representation computable in real-time. Resectograms incorporate multiple information layers, providing comprehensive data for liver surgery planning. We enhanced the visualization through improved 3D-to-2D orientation mapping and distortion-minimizing parameterization algorithms. This research contributes to advancing liver surgery planning tools by offering a more accessible and informative visualization method. The code repository for this work is available at: https://github.com/ALive-research/Slicer-Liver.
Citations: 0
ACD-ML: Advanced CKD detection using machine learning: A tri-phase ensemble and multi-layered stacking and blending approach
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2024.100173
Mir Faiyaz Hossain, Shajreen Tabassum Diya, Riasat Khan
Chronic Kidney Disease (CKD), the gradual loss and irreversible damage of kidney function, is one of the leading contributors to death, causing about 1.3 million deaths annually. Slowing the progression of kidney deterioration is extremely important to avoid dialysis or transplantation. This study aims to leverage machine learning algorithms and ensemble models for early detection of CKD using the "Chronic Kidney Disease (CKD15)" and "Risk Factor Prediction of Chronic Kidney Disease (CKD21)" datasets from the UCI Machine Learning Repository. Two encoding techniques are introduced to combine the datasets, Discrete Encoding and Ranged Encoding, resulting in Discrete Merged and Ranged Merged datasets. The preprocessing stage employs normalization, class balancing with synthetic oversampling, and five feature selection techniques, including RFECV and Pearson Correlation. This work proposes a novel Tri-phase Ensemble technique combining Voting, Bagging, and Stacking approaches, together with two other ensemble models: Multi-layer Stacking and Multi-layer Blending classifiers. The investigation reveals that, for the Discrete Merged dataset, the novel Tri-phase Ensemble and Multi-layer Stacking with layers interchanged both achieve an accuracy of 99.5%. For the Ranged Merged dataset, AdaBoost attains an accuracy of 97.5%. Logistic Regression accomplishes an accuracy of 99.5% when validating with the discrete dataset, whereas for validation with the ranged dataset, both Random Forest and SVM achieve 100% accuracy.
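The hard-voting stage of such an ensemble reduces to a per-sample majority over the base models' predictions; a minimal sketch (toy predictions, not the paper's models):

```python
from collections import Counter

def majority_vote(model_predictions):
    """Hard-voting ensemble: per sample, return the label most models agree on."""
    return [
        Counter(sample_preds).most_common(1)[0][0]
        for sample_preds in zip(*model_predictions)
    ]

# Three hypothetical base classifiers' predictions on five samples.
preds = [
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 1, 1, 1, 1],
]
```

A stacking layer would instead feed these base predictions as features to a meta-learner, and blending would do the same on a held-out split.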
Citations: 0
A comparative approach of analyzing data uncertainty in parameter estimation for a Lumpy Skin Disease model
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100178
Edwiga Renald , Miracle Amadi , Heikki Haario , Joram Buza , Jean M. Tchuenche , Verdiana G. Masanja
The livestock industry has been economically affected by the emergence and reemergence of infectious diseases such as Lumpy Skin Disease (LSD). This has driven interest in researching efficient mitigation measures for controlling the transmission of LSD. Mathematical models of real-life systems inherently lose information, and consequently the accuracy of their results is often complicated by uncertainties in the data used to estimate parameter values. There is a need for models accompanied by knowledge about the confidence of their long-term predictions. This study introduces a novel yet simple technique for analyzing data uncertainties in compartmental models, which is then used to examine the reliability of a deterministic model of the transmission dynamics of LSD in cattle; this involves investigating data-quality scenarios under which the model parameters can be well identified. The uncertainties are assessed with the Adaptive Metropolis-Hastings algorithm, a standard Markov Chain Monte Carlo (MCMC) method. Simulation results with synthetic cases show that the model parameters are identifiable given a reasonable amount of synthetic noise and enough data points spanning the model classes. MCMC outcomes derived from synthetic data, generated to mimic the characteristics of the real dataset, significantly surpassed those obtained from actual data in terms of the uncertainties in identifying parameters and making predictions.
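The sampler behind this analysis is Adaptive Metropolis-Hastings; the non-adaptive random-walk core it builds on can be sketched in a few lines (the Gaussian target below is a toy stand-in for a model's parameter posterior):

```python
import math
import random

def metropolis(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2), accept with
    probability min(1, target(x')/target(x)). Adaptive Metropolis additionally
    tunes the proposal covariance from the chain's own history."""
    rng = random.Random(seed)
    x, lp, chain = x0, log_target(x0), []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_target(xp)
        if math.log(rng.random()) < lpp - lp:  # Metropolis acceptance test
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: Gaussian posterior with mean 3, standard deviation 1.
chain = metropolis(lambda x: -0.5 * (x - 3.0) ** 2, x0=3.0, n_steps=20000)
chain_mean = sum(chain) / len(chain)
```

The spread of the chain around its mean is exactly the parameter uncertainty the study quantifies for the LSD model's parameters.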
Citations: 0
A sustainable neuromorphic framework for disease diagnosis using digital medical imaging
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2024.100171
Rutwik Gulakala, Marcus Stoffel

Background and objective:

In the diagnosis of medical images, neural network classification can support rapid diagnosis alongside existing imaging methods. Although current state-of-the-art deep learning methods can contribute to this image recognition task, the aim of the present study is to develop a general classification framework with brain-inspired neural networks. To this end, spiking neural network models, also known as third-generation models, are included here to capitalize on their sparse characteristics and their capacity to significantly decrease energy consumption. Inspired by the recent development of neuromorphic hardware, a sustainable neural network framework is proposed, reducing energy consumption to as little as a thousandth of that of current state-of-the-art second-generation artificial neural networks. Making use of sparse signal transmission as in the human brain, a neuromorphic algorithm for imaging diagnostics is introduced.

Methods:

A novel, sustainable, brain-inspired spiking neural network is proposed to perform the multi-class classification of digital medical images. The framework comprises branched and densely connected layers described by a Leaky-Integrate and Fire (LIF) neuron model. Backpropagation of discontinuous spiking activations in the forward pass is achieved by surrogate gradients, in this case, fast sigmoid. The data for the spiking neural network is encoded into binary spikes with a latency encoding strategy. The proposed model is evaluated on a publicly available dataset of digital X-rays of chest and compared with an equivalent classical neural network. The models are trained using enhanced and pre-processed X-ray images and are evaluated based on classification metrics.
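The two ingredients the abstract names — latency encoding of intensities into single spikes and Leaky-Integrate-and-Fire (LIF) dynamics — can be illustrated with a minimal NumPy sketch. This is not the authors' branched architecture or their fast-sigmoid surrogate-gradient training (which requires an autograd framework); the function names, time-step count, and weight initialization below are illustrative assumptions.

```python
import numpy as np

def latency_encode(x, n_steps=20):
    """Encode normalized intensities in [0, 1] as single-spike latency trains:
    brighter pixels fire earlier. Returns a (n_steps, n_features) binary array."""
    x = np.clip(x, 1e-6, 1.0)
    # Larger intensity -> earlier spike time (simple linear latency code).
    spike_times = np.round((1.0 - x) * (n_steps - 1)).astype(int)
    train = np.zeros((n_steps, x.size))
    train[spike_times, np.arange(x.size)] = 1.0
    return train

def lif_forward(spikes, w, beta=0.9, threshold=1.0):
    """LIF layer: the membrane potential decays by `beta`, integrates weighted
    input spikes, and emits a spike with a soft reset when it crosses threshold."""
    n_steps, _ = spikes.shape
    n_out = w.shape[1]
    mem = np.zeros(n_out)
    out = np.zeros((n_steps, n_out))
    for t in range(n_steps):
        mem = beta * mem + spikes[t] @ w
        fired = mem >= threshold
        out[t, fired] = 1.0
        mem[fired] -= threshold   # soft reset keeps residual potential
    return out

rng = np.random.default_rng(0)
pixels = rng.random(64)                  # a toy 8x8 "image", values in [0, 1)
train = latency_encode(pixels, n_steps=20)
w = rng.normal(0.0, 0.3, size=(64, 10))  # one dense layer, 10 output neurons
out_spikes = lif_forward(train, w)
print(out_spikes.shape)                  # (20, 10)
```

The sparsity the abstract exploits is visible here: each input neuron fires exactly once per presentation, so energy scales with spike counts rather than dense activations.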

Results:

The proposed neuromorphic framework achieved an extremely high classification accuracy of 99.22 % on an unseen test set, together with high precision and recall. The framework achieves this accuracy while consuming roughly one-thousandth of the electrical power of classical neural network architectures.

Conclusion:

Though there is a loss of information due to encoding, the proposed neuromorphic framework has achieved accuracy close to its second-generation counterpart. Therefore, the benefit of the proposed framework is the high accuracy of classification while consuming a thousandth of the power, enabling a sustainable and accessible add-on for the available diagnostic tools, such as medical imaging equipment, to achieve rapid diagnosis.
Feature selection based on Mahalanobis distance for early Parkinson disease classification
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100177
Mustafa Noaman Kadhim , Dhiah Al-Shammary , Ahmed M. Mahdi , Ayman Ibaida
Standard classifiers struggle with high-dimensional datasets due to increased computational complexity, difficulty in visualization and interpretation, and challenges in handling redundant or irrelevant features. This paper proposes a novel feature selection method based on the Mahalanobis distance for Parkinson's disease (PD) classification. The proposed feature selection identifies relevant features by measuring their distance from the dataset's mean vector, considering the covariance structure. Features with larger Mahalanobis distances are deemed more relevant as they exhibit greater discriminative power relative to the dataset's distribution, aiding in effective feature subset selection. Significant improvements in classification performance were observed across all models. On the "Parkinson Disease Classification Dataset", the feature set was reduced from 22 to 11 features, resulting in accuracy improvements ranging from 10.17 % to 20.34 %, with the K-Nearest Neighbors (KNN) classifier achieving the highest accuracy of 98.31 %. Similarly, on the "Parkinson Dataset with Replicated Acoustic Features", the feature set was reduced from 45 to 18 features, achieving accuracy improvements ranging from 1.38 % to 13.88 %, with the Random Forest (RF) classifier achieving the best accuracy of 95.83 %. By identifying convergence features and eliminating divergence features, the proposed method effectively reduces dimensionality while maintaining or improving classifier performance. Additionally, the proposed feature selection method significantly reduces execution time, making it highly suitable for real-time applications in medical diagnostics, where timely and accurate disease identification is critical for improving patient outcomes.
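One plausible reading of the scoring step — each feature's average contribution to the samples' squared Mahalanobis distance from the dataset mean, using the covariance structure — can be sketched as follows. The paper's exact scoring rule, thresholds, and data are not reproduced here; all names and the toy dataset are illustrative.

```python
import numpy as np

def mahalanobis_feature_scores(X):
    """Score each feature by its mean contribution to the samples' squared
    Mahalanobis distance from the dataset mean (an illustrative reading of
    the abstract; the published scoring rule may differ in detail)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.pinv(cov)          # pseudo-inverse guards against singularity
    centered = X - mu
    # Elementwise split of (x - mu)^T S^-1 (x - mu) into per-feature terms.
    contrib = centered * (centered @ cov_inv)
    return contrib.mean(axis=0)

def select_top_k(X, k):
    """Keep the k features with the largest scores (larger = more relevant)."""
    scores = mahalanobis_feature_scores(X)
    keep = np.sort(np.argsort(scores)[::-1][:k])
    return keep, X[:, keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 22))             # toy stand-in for the 22-feature dataset
idx, X_reduced = select_top_k(X, k=11)
print(X_reduced.shape)                     # (200, 11)
```

This mirrors the 22-to-11 reduction reported for the first dataset; on real data the retained columns would feed any of the downstream classifiers (KNN, RF, etc.).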
Unifying heterogeneous hyperspectral databases for in vivo human brain cancer classification: Towards robust algorithm development
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2025.100183
Alberto Martín-Pérez , Beatriz Martinez-Vega , Manuel Villa , Raquel Leon , Alejandro Martinez de Ternero , Himar Fabelo , Samuel Ortega , Eduardo Quevedo , Gustavo M. Callico , Eduardo Juarez , César Sanz

Background and objective

Cancer is one of the leading causes of death worldwide, and early, accurate detection is crucial to improving patient outcomes. Differentiating between healthy and diseased brain tissue during surgery is particularly challenging. Hyperspectral imaging, combined with machine and deep learning algorithms, has shown promise for detecting brain cancer in vivo. The present study analyses and compares the performance of various algorithms to evaluate their efficacy in unifying hyperspectral databases obtained from different cameras. These databases include data collected from various hospitals using different hyperspectral instruments, which vary in spectral range, spatial and spectral resolution, and illumination conditions. The primary aim is to assess the performance of models given the limited availability of in vivo human brain hyperspectral data. The classification of healthy tissue, tumors, and blood vessels is achieved using different algorithms on two databases: HELICoiD and SLIMBRAIN.

Methods

This study evaluated conventional and deep learning methods (KNN, RF, SVM, 1D-DNN, 2D-CNN, Fast 3D-CNN, and a DRNN), and advanced classification frameworks (LIBRA and HELICoiD) using cross-validation on 16 and 26 patients from each database, respectively.

Results

For individual datasets, LIBRA achieved the highest sensitivity for tumor classification, with values of 38 %, 72 %, and 80 % on the SLIMBRAIN, HELICoiD (20 bands), and HELICoiD (128 bands) datasets, respectively. The HELICoiD framework yielded the best F1 Scores for tumor tissue, with values of 11 %, 45 %, and 53 % for the same datasets. For the Unified dataset, LIBRA obtained the best results in identifying the tumor, with a sensitivity of 40 % and an F1 Score of 30 %.
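The sensitivity and F1 Score figures compared above follow the standard per-class definitions, which can be sketched on toy labels (not the paper's data):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, cls):
    """Sensitivity (recall) and F1 for one class from label arrays."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return sensitivity, f1

# Toy labels: 0 = healthy, 1 = tumor, 2 = blood vessel.
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 2, 2])
y_pred = np.array([1, 1, 0, 0, 0, 0, 0, 1, 2, 2])
sens, f1 = per_class_metrics(y_true, y_pred, cls=1)
print(round(sens, 2), round(f1, 2))  # 0.4 0.5
```

Reporting these per class (rather than overall accuracy) matters here because tumor pixels are a small minority of each hyperspectral image.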
A computer-based method for the automatic identification of the dimensional features of human cervical vertebrae
Pub Date : 2025-01-01 DOI: 10.1016/j.cmpbup.2024.100175
Nicola Cappetti , Luca Di Angelo , Carlotta Fontana , Antonio Marzola

Background and objective

Accurately measuring cervical vertebrae dimensions is crucial for diagnosing conditions, planning surgeries, and studying morphological variations related to gender, age, and ethnicity. However, traditional manual measurement methods, due to their labour-intensive nature, time-consuming process, and susceptibility to operator variability, often fall short in providing the objectivity required for reliable measurements. This study addresses these limitations by introducing a novel computer-based method for automatically identifying the dimensional features of human cervical vertebrae, leveraging 3D geometric models obtained from CT or 3D scanning.

Methods

The proposed approach involves defining a local coordinate system and establishing a set of rules and parameters to evaluate the typical dimensional features of the vertebral body, foramen, and spinous process in the sagittal and coronal planes of the high-density point cloud of the cervical vertebra model. This system provides a consistent measurement reference frame, improving the method's reliability and objectivity. Based on this reference system, the method automates the traditional standard protocol, typically performed manually by radiologists, through an algorithmic approach.
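The role of the local coordinate system — giving every vertebra a consistent frame in which sagittal and coronal measurements are taken — can be illustrated generically with a PCA-derived frame for a point cloud. The paper defines its anatomical frame from vertebra-specific rules and landmarks, so this sketch and its names are assumptions, not the authors' procedure.

```python
import numpy as np

def local_frame(points):
    """Derive a local frame for a point cloud via PCA: origin at the centroid,
    axes along the principal directions (largest-variance direction first)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axes = eigvecs[:, np.argsort(eigvals)[::-1]]
    if np.linalg.det(axes) < 0:            # keep a right-handed frame
        axes[:, -1] *= -1
    return centroid, axes

def to_local(points, centroid, axes):
    """Express points in the local frame (e.g. to take sagittal/coronal slices)."""
    return (points - centroid) @ axes

rng = np.random.default_rng(2)
cloud = rng.normal(size=(500, 3)) * np.array([5.0, 2.0, 1.0])  # anisotropic toy cloud
c, R = local_frame(cloud)
local = to_local(cloud, c, R)
print(local.shape)  # (500, 3)
```

Once every vertebra model is expressed in such a frame, dimensional features measured in its planes become comparable across specimens and operators, which is the repeatability gain reported below.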

Results

The performance of the computer-based method was compared with the traditional manual approach using a dataset of nine complete cervical tracts. Manual measurements were conducted following a defined protocol. The manual method demonstrated poor repeatability and reproducibility, with substantial differences between the minimum and maximum values for the measured features in intra- and inter-operator evaluations. In contrast, the measurements obtained with the proposed computer-based method were consistent and repeatable.

Conclusions

The proposed computer-based method provides a more reliable and objective approach for measuring the dimensional features of cervical vertebrae. It establishes a procedural standard for deducing the morphological characteristics of cervical vertebrae, with significant implications for clinical applications, such as surgical planning and diagnosis, as well as for forensic anthropology and spinal anatomy research. Further refinement and validation of the algorithmic rules and investigations into the influence of morphological abnormalities are necessary to improve the method's accuracy.