
Latest publications in Computers in biology and medicine

Artificial intelligence and deep learning algorithms for epigenetic sequence analysis: A review for epigeneticists and AI experts.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-04 | DOI: 10.1016/j.compbiomed.2024.109302
Muhammad Tahir, Mahboobeh Norouzi, Shehroz S Khan, James R Davie, Soichiro Yamanaka, Ahmed Ashraf

Epigenetics encompasses mechanisms that can alter the expression of genes without changing the underlying genetic sequence. The epigenetic regulation of gene expression is initiated and sustained by several mechanisms such as DNA methylation, histone modifications, chromatin conformation, and non-coding RNA. The changes in gene regulation and expression can manifest in the form of various diseases and disorders such as cancer and congenital deformities. Over the last few decades, high-throughput experimental approaches have been used to identify and understand epigenetic changes, but these laboratory experimental approaches and biochemical processes are time-consuming and expensive. To overcome these challenges, machine learning and artificial intelligence (AI) approaches have been extensively used for mapping epigenetic modifications to their phenotypic manifestations. In this paper, we provide a narrative review of published research on AI models trained on epigenomic data to address a variety of problems such as prediction of disease markers, gene expression, enhancer-promoter interaction, and chromatin states. The purpose of this review is twofold, as it is addressed to both AI experts and epigeneticists. For AI researchers, we provide a taxonomy of epigenetics research problems that can benefit from an AI-based approach. For epigeneticists, we provide, for each of the above problems, a list of candidate AI solutions in the literature. We also identify several gaps in the literature and research challenges, and offer recommendations to address these challenges.

Citations: 0
Integrating multimodal learning for improved vital health parameter estimation.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-10-16 | DOI: 10.1016/j.compbiomed.2024.109104
Ashish Marisetty, Prathistith Raj Medi, Praneeth Nemani, Venkanna Udutalapally, Debanjan Das

Malnutrition poses a significant threat to global health, resulting from an inadequate intake of essential nutrients that adversely impacts vital organs and overall bodily functioning. Periodic examinations and mass screenings, incorporating both conventional and non-invasive techniques, have been employed to combat this challenge. However, these approaches suffer from critical limitations, such as the need for additional equipment, lack of comprehensive feature representation, absence of suitable health indicators, and the unavailability of smartphone implementations for precise estimations of Body Fat Percentage (BFP), Basal Metabolic Rate (BMR), and Body Mass Index (BMI) to enable efficient smart-malnutrition monitoring. To address these constraints, this study presents a groundbreaking, scalable, and robust smart malnutrition-monitoring system that leverages a single full-body image of an individual to estimate height, weight, and other crucial health parameters within a multi-modal learning framework. Our proposed methodology involves the reconstruction of a highly precise 3D point cloud, from which 512-dimensional feature embeddings are extracted using a headless-3D classification network. Concurrently, facial and body embeddings are also extracted, and through the application of learnable parameters, these features are then utilized to estimate weight accurately. Furthermore, essential health metrics, including BMR, BFP, and BMI, are computed to comprehensively analyze the subject's health, subsequently facilitating the provision of personalized nutrition plans. While being robust to a wide range of lighting conditions across multiple devices, our model achieves a low Mean Absolute Error (MAE) of ± 4.7 cm and ± 5.3 kg in estimating height and weight.
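
The abstract reports that BMR, BFP, and BMI are computed from the estimated height and weight, but it does not say which equations are used. The sketch below is a minimal illustration that assumes the textbook BMI definition, the Mifflin-St Jeor equation for BMR, and the Deurenberg formula for BFP; the function names and example inputs are hypothetical, not taken from the paper.

```python
# Minimal sketch: deriving BMI, BMR, and BFP from an estimated height and weight.
# Assumes standard published formulas (BMI definition, Mifflin-St Jeor BMR,
# Deurenberg BFP); the original paper does not specify its exact equations.

def bmi(weight_kg: float, height_cm: float) -> float:
    """Body Mass Index in kg/m^2."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, is_male: bool) -> float:
    """Basal Metabolic Rate (kcal/day) via the Mifflin-St Jeor equation."""
    base = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age
    return base + (5.0 if is_male else -161.0)

def bfp_deurenberg(bmi_value: float, age: int, is_male: bool) -> float:
    """Body Fat Percentage via the Deurenberg formula."""
    return 1.20 * bmi_value + 0.23 * age - 10.8 * (1 if is_male else 0) - 5.4

if __name__ == "__main__":
    # Hypothetical values standing in for the model's height/weight estimates.
    est_height_cm, est_weight_kg, age, is_male = 172.0, 70.5, 30, True
    b = bmi(est_weight_kg, est_height_cm)
    print(f"BMI: {b:.1f} kg/m^2")
    print(f"BMR: {bmr_mifflin_st_jeor(est_weight_kg, est_height_cm, age, is_male):.0f} kcal/day")
    print(f"BFP: {bfp_deurenberg(b, age, is_male):.1f} %")
```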

Citations: 0
Riemannian manifold-based geometric clustering of continuous glucose monitoring to improve personalized diabetes management.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-10-16 | DOI: 10.1016/j.compbiomed.2024.109255
Jiafeng Song, Jocelyn McNeany, Yifei Wang, Tanicia Daley, Arlene Stecenko, Rishikesan Kamaleswaran

Background: Continuous Glucose Monitoring (CGM) provides a detailed representation of glucose fluctuations in individuals, offering a rich dataset for understanding glycemic control in diabetes management. This study explores the potential of Riemannian manifold-based geometric clustering to analyze and interpret CGM data for individuals with Type 1 Diabetes (T1D) and healthy controls (HC), aiming to enhance diabetes management and treatment personalization.

Methods: We utilized CGM data from publicly accessible datasets, covering both T1D individuals on insulin and HC. Data were segmented into daily intervals, from which 27 distinct glycemic features were extracted. Uniform Manifold Approximation and Projection (UMAP) was then applied to reduce dimensionality and visualize the data, and model performance was validated through correlation analysis between the Silhouette Score (SS) relative to the HC cluster and HbA1c levels.

Results: UMAP effectively distinguished the T1D-on-daily-insulin group from the HC group, with data points clustering according to glycemic profiles. Moderate inverse correlations were observed between the SS relative to the HC cluster and HbA1c levels, supporting the clinical relevance of the UMAP-derived metric.

Conclusions: This study demonstrates the utility of UMAP in enhancing the analysis of CGM data for diabetes management. We revealed distinct clustering of glycemic profiles between healthy individuals and diabetics on daily insulin, indicating that in most instances insulin does not restore a normal glycemic phenotype. In addition, the SS quantifies, day by day, the degree of this continued dysglycemia and therefore potentially offers a novel approach for personalized diabetes care.
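
The Methods above describe a pipeline of daily glycemic feature vectors, a UMAP embedding, a per-day Silhouette Score (SS) against the HC cluster, and a correlation of that score with HbA1c. The sketch below approximates this with umap-learn, scikit-learn, and SciPy on synthetic placeholder data; the 27 real features, the aggregation level, and the correlation statistic used in the paper may differ.

```python
# Minimal sketch of the described pipeline: 27 daily glycemic features -> UMAP
# embedding -> per-day Silhouette Score against the HC cluster -> correlation
# with HbA1c. Synthetic data stands in for the real CGM-derived features.
import numpy as np
import umap                                    # pip install umap-learn
from sklearn.metrics import silhouette_samples
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Placeholder feature matrices: rows are patient-days, columns are 27 features.
X_hc  = rng.normal(0.0, 1.0, size=(200, 27))   # healthy controls
X_t1d = rng.normal(0.8, 1.2, size=(200, 27))   # T1D on daily insulin
X = np.vstack([X_hc, X_t1d])
labels = np.array([0] * len(X_hc) + [1] * len(X_t1d))   # 0 = HC cluster, 1 = T1D

# 2-D embedding for visualization and geometric clustering.
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

# Silhouette of each day; for T1D days this measures separation from the HC cluster.
ss = silhouette_samples(embedding, labels)
ss_t1d = ss[labels == 1]

# Correlate the per-day SS with HbA1c (placeholder values; real data would be per patient).
hba1c = 7.0 + 0.5 * rng.normal(size=ss_t1d.shape[0])
r, p = pearsonr(ss_t1d, hba1c)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```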

Citations: 0
Deep learning approaches for automated classification of neonatal lung ultrasound with assessment of human-to-AI interrater agreement.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-05 | DOI: 10.1016/j.compbiomed.2024.109315
Noreen Fatima, Umair Khan, Xi Han, Emanuela Zannin, Camilla Rigotti, Federico Cattaneo, Giulia Dognini, Maria Luisa Ventura, Libertario Demi

Neonatal respiratory disorders pose significant challenges in clinical settings, often requiring rapid and accurate diagnostic solutions for effective management. Lung ultrasound (LUS) has emerged as a promising tool to evaluate respiratory conditions in neonates. This evaluation is mainly based on the interpretation of visual patterns (horizontal artifacts, vertical artifacts, and consolidations). Automated interpretation of these patterns can assist clinicians in their evaluations. However, developing AI-based solutions for this purpose is challenging, primarily due to the lack of annotated data and the inherent subjectivity of expert interpretations. This study aims to propose an automated solution for the reliable interpretation of patterns in LUS videos of newborns. We employed two distinct strategies. The first strategy is a frame-to-video-level approach that computes frame-level predictions from deep learning (DL) models trained from scratch (F2V-TS) and from fine-tuned pre-trained models (F2V-FT), followed by aggregation of those predictions for video-level evaluation. The second strategy is a direct video classification approach (DV) for evaluating LUS data. To evaluate our methods, we used LUS data from 34 neonatal patients comprising 70 exams, with annotations provided by three expert human operators (3HOs). Results show that within the frame-to-video-level approach, F2V-FT achieved the best performance with an accuracy of 77%, showing moderate agreement with the 3HOs, while the direct video classification approach resulted in an accuracy of 72%, showing substantial agreement with the 3HOs. Our study lays the foundation for reliable AI-based solutions for newborn LUS data evaluation.
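
The frame-to-video-level strategy turns per-frame predictions into a single video-level label. The abstract does not state the exact aggregation rule, so the sketch below assumes a simple mean of per-frame class probabilities followed by an arg-max (majority voting over frame labels would be an equally plausible choice); the class names and frame count are placeholders.

```python
# Minimal sketch of frame-to-video aggregation: average per-frame class
# probabilities, then take the arg-max as the video-level prediction.
# The paper's exact aggregation rule is not specified in the abstract.
import numpy as np

CLASSES = ["horizontal_artifacts", "vertical_artifacts", "consolidations"]  # placeholder labels

def video_prediction(frame_probs: np.ndarray):
    """frame_probs: (n_frames, n_classes) softmax outputs from the frame-level model."""
    mean_probs = frame_probs.mean(axis=0)          # aggregate over frames
    return CLASSES[int(mean_probs.argmax())], mean_probs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Placeholder per-frame softmax outputs for one LUS video of 60 frames.
    logits = rng.normal(size=(60, len(CLASSES)))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    label, mean_probs = video_prediction(probs)
    print(label, np.round(mean_probs, 3))
```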

Citations: 0
An adaptive enhanced human memory algorithm for multi-level image segmentation for pathological lung cancer images.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-10-16 | DOI: 10.1016/j.compbiomed.2024.109272
Mahmoud Abdel-Salam, Essam H Houssein, Marwa M Emam, Nagwan Abdel Samee, Mona M Jamjoom, Gang Hu

Lung cancer is a critical health issue that demands swift and accurate diagnosis for effective treatment. In medical imaging, segmentation is crucial for identifying and isolating regions of interest, which is essential for precise diagnosis and treatment planning. Traditional metaheuristic-based segmentation methods often struggle with slow convergence, poorly optimized thresholds, and balancing exploration with exploitation, leading to suboptimal performance in multi-threshold segmentation of lung cancer images. This study presents ASG-HMO, an enhanced variant of the Human Memory Optimization (HMO) algorithm, selected for its simplicity, versatility, and minimal parameters. Although HMO has never been applied to multi-thresholding image segmentation, its characteristics make it ideal for improving pathological lung cancer image segmentation. ASG-HMO incorporates four innovative strategies that address key challenges in the segmentation process. First, an enhanced adaptive mutualism phase is proposed to balance exploration and exploitation, allowing tumor boundaries to be delineated accurately without getting trapped in suboptimal solutions. Second, a spiral motion strategy adaptively refines segmentation solutions by focusing on both the overall lung structure and the intricate tumor details. Third, a Gaussian mutation strategy introduces diversity in the search process, enabling the exploration of a broader range of segmentation thresholds to enhance the accuracy of segmented regions. Finally, an adaptive t-distribution disturbance strategy is proposed to help the algorithm avoid local optima and refine segmentation in later stages. The effectiveness of ASG-HMO is validated through rigorous testing on the IEEE CEC'17 and CEC'20 benchmark suites, followed by its application to multilevel thresholding segmentation in nine histopathology lung cancer images. In these experiments, six different segmentation thresholds were tested, and the algorithm was compared to several classical, recent, and advanced segmentation algorithms. In addition, the proposed ASG-HMO leverages 2D Renyi entropy and 2D histograms to enhance the precision of the segmentation process. Quantitative result analysis in pathological lung cancer segmentation showed that ASG-HMO achieved a superior maximum Peak Signal-to-Noise Ratio (PSNR) of 31.924, a Structural Similarity Index Measure (SSIM) of 0.919, a Feature Similarity Index Measure (FSIM) of 0.990, and a Probability Rand Index (PRI) of 0.924. These results indicate that ASG-HMO significantly outperforms existing algorithms in both convergence speed and segmentation accuracy. This demonstrates the robustness of ASG-HMO as a framework for precise segmentation of pathological lung cancer images, offering substantial potential for improving clinical diagnostic processes.
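
ASG-HMO itself is a bespoke metaheuristic, but the surrounding workflow it plugs into, choosing several thresholds, quantizing the image into regions, and scoring the result with PSNR and SSIM, can be illustrated with standard tools. The sketch below uses multi-Otsu thresholds from scikit-image purely as a stand-in for the thresholds ASG-HMO would optimize; it is not the authors' algorithm, and the test image is a placeholder rather than a histopathology slide.

```python
# Illustration of multilevel-threshold segmentation and its PSNR/SSIM scoring.
# Multi-Otsu is used only as a stand-in for the ASG-HMO-optimized thresholds.
import numpy as np
from skimage import data
from skimage.filters import threshold_multiotsu
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

image = data.camera()                       # placeholder grayscale image (uint8)

# Pick (k-1) thresholds for k classes; ASG-HMO would search these instead.
thresholds = threshold_multiotsu(image, classes=4)
regions = np.digitize(image, bins=thresholds)

# Rebuild a piecewise-constant image: each region replaced by its mean intensity.
segmented = np.zeros_like(image, dtype=np.float64)
for r in np.unique(regions):
    segmented[regions == r] = image[regions == r].mean()
segmented = segmented.astype(np.uint8)

print("thresholds:", thresholds)
print("PSNR:", peak_signal_noise_ratio(image, segmented))
print("SSIM:", structural_similarity(image, segmented))
```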

Citations: 0
The efficient classification of breast cancer on low-power IoT devices: A study on genetically evolved U-Net.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-04 | DOI: 10.1016/j.compbiomed.2024.109296
Mohit Agarwal, Amit Kumar Dwivedi, Dibyanarayan Hazra, Preeti Sharma, Suneet Kumar Gupta, Deepak Garg

Breast cancer is the most common cancer among women, and in some cases, it also affects men. Since early detection allows for proper treatment, automated data classification is essential. Although such classifications provide timely results, the resource requirements of such models, i.e., computation and storage, are high. As a result, these models are not suitable for resource-constrained devices (for example, IoT devices). In this work, we highlight the U-Net model and, to deploy it to IoT devices, we compress the model using a genetic algorithm. We assess the proposed method using a publicly accessible, benchmarked dataset. To verify the efficacy of the suggested methodology, we conducted experiments on two more datasets, specifically CamVid and Potato leaf disease. In addition, we used the suggested method to shrink the MiniSegNet and FCN 32 models, which shows that the compressed U-Net approach works for classifying breast cancer. The results of the study indicate a significant decrease in the storage requirements of U-Net, with 96.12% compression for the breast cancer dataset and a 1.97x improvement in inference time. However, after compression of the model, the drop in accuracy is only 1.33%.
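
The abstract does not detail the genetic encoding or fitness used to shrink U-Net, so the sketch below only shows the generic pattern such approaches typically follow: a binary chromosome marks which filters are kept, and the fitness trades accuracy against model size. The `evaluate_accuracy` function is a toy placeholder standing in for pruning, fine-tuning, and evaluating the real network; none of the constants come from the paper.

```python
# Generic GA skeleton for network compression: a binary chromosome marks which
# filters are kept; fitness trades (placeholder) accuracy against model size.
# This is an illustrative sketch, not the authors' exact encoding or fitness.
import numpy as np

rng = np.random.default_rng(42)
N_FILTERS = 64          # hypothetical number of prunable filters
POP, GENS, MUT_P = 30, 40, 0.02

def evaluate_accuracy(mask: np.ndarray) -> float:
    """Placeholder: in practice, prune U-Net by `mask`, fine-tune, and measure accuracy."""
    return 0.95 - 0.25 * (1.0 - mask.mean()) ** 2   # toy proxy: accuracy drops as more is pruned

def fitness(mask: np.ndarray) -> float:
    size_penalty = mask.mean()                       # fraction of filters kept
    return evaluate_accuracy(mask) - 0.3 * size_penalty

def tournament(pop, fits):
    i, j = rng.integers(len(pop), size=2)
    return pop[i] if fits[i] >= fits[j] else pop[j]

pop = rng.integers(0, 2, size=(POP, N_FILTERS))
for _ in range(GENS):
    fits = np.array([fitness(ind) for ind in pop])
    new_pop = []
    for _ in range(POP):
        p1, p2 = tournament(pop, fits), tournament(pop, fits)
        cut = rng.integers(1, N_FILTERS)                      # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        flip = rng.random(N_FILTERS) < MUT_P                  # bit-flip mutation
        child[flip] ^= 1
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"kept {best.sum()}/{N_FILTERS} filters, fitness {fitness(best):.3f}")
```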

Citations: 0
A conflict-free multi-modal fusion network with spatial reinforcement transformers for brain tumor segmentation.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-05 | DOI: 10.1016/j.compbiomed.2024.109331
Tianyun Hu, Hongqing Zhu, Ziying Wang, Ning Chen, Bingcang Huang, Weiping Lu, Ying Wang

Brain gliomas are a leading cause of cancer mortality worldwide. Existing glioma segmentation approaches using multi-modal inputs often rely on a simplistic approach of stacking images from all modalities, disregarding modality-specific features that could optimize diagnostic outcomes. This paper introduces STE-Net, a spatial reinforcement hybrid Transformer-based tri-branch multi-modal evidential fusion network designed for conflict-free brain tumor segmentation. STE-Net features two independent encoder-decoder branches that process distinct modality sets, along with an additional branch that integrates features through a cross-modal channel-wise fusion (CMCF) module. The encoder employs a spatial reinforcement hybrid Transformer (SRHT), which combines a Swin Transformer block and a modified convolution block to capture richer spatial information. At the output level, a conflict-free evidential fusion mechanism (CEFM) is developed, leveraging the Dempster-Shafer (D-S) evidence theory and a conflict-solving strategy within a complex network framework. This mechanism ensures balanced reliability among the three output heads and mitigates potential conflicts. Each output is treated as a node in the complex network, and its importance is reassessed through the computation of direct and indirect weights to prevent potential mutual conflicts. We evaluate STE-Net on three public datasets: BraTS2018, BraTS2019, and BraTS2021. Both qualitative and quantitative results demonstrate that STE-Net outperforms several state-of-the-art methods. Statistical analysis further confirms the strong correlation between predicted tumors and ground truth. The code for this project is available at https://github.com/whotwin/STE-Net.
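
The CEFM builds on Dempster-Shafer evidence theory. As background, the sketch below implements the classical Dempster rule of combination for two mass functions over a small frame of discernment; STE-Net's conflict-free mechanism and complex-network re-weighting go beyond this, so the code should be read as a generic textbook reference rather than the paper's method. The example masses for the two output heads are hypothetical.

```python
# Classical Dempster rule of combination for two mass functions (basic belief
# assignments) over the same frame of discernment. STE-Net's CEFM extends this
# with conflict resolution and re-weighting; this is only the textbook rule.
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    combined: dict = {}
    conflict = 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                       # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined.")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

if __name__ == "__main__":
    # Hypothetical masses from two output heads over {tumor, background} plus ignorance.
    T, B = frozenset({"tumor"}), frozenset({"background"})
    theta = T | B
    m_head1 = {T: 0.6, B: 0.1, theta: 0.3}
    m_head2 = {T: 0.5, B: 0.3, theta: 0.2}
    for focal, mass in dempster_combine(m_head1, m_head2).items():
        print(set(focal), round(mass, 3))
```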

Citations: 0
Simulation of hip bony range of motion (BROM) corresponds to the observed functional range of motion (FROM) for pure flexion, internal rotation in deep flexion, and external rotation in minimal flexion-extension - A cadaver study.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-05 | DOI: 10.1016/j.compbiomed.2024.109270
Arnab Palit, Mark A Williams, Ercihan Kiraci, Vineet Seemala, Vatsal Gupta, Jim Pierrepont, Christopher Plaskos, Richard King

Background: The study investigated the relationship between computed bony range of motion (BROM) and actual functional range of motion (FROM) as directly measured in cadaveric hips. The hypothesis was that some hip movements are not substantially restricted by soft tissues, and therefore, computed BROM for these movements may effectively represent FROM, providing a reliable parameter for computational pre-operative planning.

Methods: Maximum passive FROM was measured in nine cadaveric hips using optical tracking. Each hip was measured in at least ninety FROM positions, covering flexion, extension, abduction, flexion-internal rotation (IR), flexion-external rotation (ER), extension-IR, and extension-ER movements. The measured FROM was virtually recreated using 3D models of the femur and pelvis derived from CT scans, and the corresponding BROM was computed. The relationship between FROM and BROM was classified into three groups: close (mean difference<5°), moderate (mean difference 5-15°), and weak (mean difference>15°).

Results: The relationship between FROM and BROM was close for pure flexion (difference = 3.1° ± 3.9°) and IR in deep (>70°) flexion (difference = 4.3° ± 4.6°). The relationship was moderate for ER in minimal flexion (difference = 10.3° ± 5.8°) and ER in minimal extension (difference = 11.7° ± 7.2°). Bony impingement was observed in some cases during these movements. Other movements showed a weak relationship: large differences were observed in extension (51.9° ± 14.4°), abduction (18.6° ± 11.3°), flexion-IR at flexion<70° (37.1° ± 9.4°), extension-IR (79.6° ± 4.8°), flexion-ER at flexion>30° (45.9° ± 11.3°), and extension-ER at extension>20° (15.8° ± 4.8°).

Conclusion: BROM simulations of hip flexion, IR in deep flexion, and ER in low flexion/extension may be useful in dynamic pre-operative planning of total hip arthroplasty.

Citations: 0
The untapped potential of 3D virtualization using high resolution scanner-based and photogrammetry technologies for bone bank digital modeling.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-05 | DOI: 10.1016/j.compbiomed.2024.109340
Anuar Giménez-El-Amrani, Andres Sanz-Garcia, Néstor Villalba-Rojas, Vicente Mirabet, Alfonso Valverde-Navarro, Carmen Escobedo-Lucea

Three-dimensional (3D) scanning technologies could transform medical practices by creating virtual tissue banks. In bone transplantation, new approaches are needed to provide surgeons with accurate tissue measurements while minimizing contamination risks and avoiding repeated freeze-thaw cycles of banked tissues. This study evaluates three prominent non-contact 3D scanning methods-structured light scanning (SLG), laser scanning (LAS), and photogrammetry (PHG)-to support tissue banking operations. We conducted a thorough examination of each technology and the precision of the 3D scanned bones using relevant anatomical specimens under sterile conditions. Cranial caps were scanned as separate inner and outer surfaces, automatically aligned, and merged with post-processing. A colorimetric analysis based on CIEDE2000 was performed, and the results were compared with questionnaires distributed among neurosurgeons. The findings indicate that certain 3D scanning methods were more appropriate for specific bones. Among the technologies, SLG emerged as optimal for tissue banking, offering a superior balance of accuracy, minimal distortion, cost-efficiency, and ease of use. All methods slightly underestimated the volume of the specimens in their virtual models. According to the colorimetric analysis and the questionnaires given to the neurosurgeons, our low-cost PHG system performed better than others in capturing cranial caps, although it exhibited the least dimensional accuracy. In conclusion, this study provides valuable insights for surgeons and tissue bank personnel in selecting the most efficient 3D non-contact scanning technology and optimizing protocols for modernized tissue banking. Future work will advance towards smart healthcare solutions and explore the development of virtual tissue banks.
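
CIEDE2000 is a standard perceptual color-difference metric, and scikit-image ships an implementation. The sketch below shows a basic comparison of a scanned-texture patch against a reference patch; the paper's actual sampling and aggregation protocol is not given in the abstract, so the RGB values here are placeholders.

```python
# Minimal CIEDE2000 comparison of a scanned color patch against a reference,
# using scikit-image. The paper's exact sampling/aggregation protocol is not
# given in the abstract; the RGB patches below are placeholders.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

# Placeholder 8x8 RGB patches in [0, 1]: reference specimen color vs. scanned texture.
reference_rgb = np.full((8, 8, 3), [0.82, 0.76, 0.65])   # bone-like tone (hypothetical)
scanned_rgb   = np.full((8, 8, 3), [0.80, 0.74, 0.68])   # slightly shifted reproduction

# Convert to CIELAB and compute the per-pixel CIEDE2000 difference.
delta_e = deltaE_ciede2000(rgb2lab(reference_rgb), rgb2lab(scanned_rgb))

print("mean dE2000:", float(delta_e.mean()))
# Rule of thumb: a dE2000 near 1 sits at the threshold of human perception.
```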

Citations: 0
Transformative artificial intelligence in gastric cancer: Advancements in diagnostic techniques.
IF 7 | CAS Tier 2 (Medicine) | Q1 BIOLOGY | Pub Date: 2024-12-01 | Epub Date: 2024-11-01 | DOI: 10.1016/j.compbiomed.2024.109261
Mobina Khosravi, Seyedeh Kimia Jasemi, Parsa Hayati, Hamid Akbari Javar, Saadat Izadi, Zhila Izadi

Gastric cancer represents a significant global health challenge with elevated incidence and mortality rates, highlighting the need for advancements in diagnostic and therapeutic strategies. This review paper addresses the critical need for a thorough synthesis of the role of artificial intelligence (AI) in the management of gastric cancer. It provides an in-depth analysis of current AI applications, focusing on their contributions to early diagnosis, treatment planning, and outcome prediction. The review identifies key gaps and limitations in the existing literature by examining recent studies and technological developments. It aims to clarify the evolution of AI-driven methods and their impact on enhancing diagnostic accuracy, personalizing treatment strategies, and improving patient outcomes. The paper emphasizes the transformative potential of AI in overcoming the challenges associated with gastric cancer management and proposes future research directions to further harness AI's capabilities. Through this synthesis, the review underscores the importance of integrating AI technologies into clinical practice to revolutionize gastric cancer management.

Citations: 0