
Latest publications in Computer methods and programs in biomedicine

Multimodal radiomics based on lesion connectome predicts stroke prognosis
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-03-01 · DOI: 10.1016/j.cmpb.2025.108701
Ning Wu , Wei Lu , Mingze Xu

Background

Stroke significantly contributes to global mortality and disability, emphasizing the critical need for effective prognostic evaluations. Connectome-based lesion-symptom mapping (CLSM) identifies structural and functional connectivity disruptions related to the lesion, while radiomics extracts high-dimensional quantitative data from multimodal medical images. Despite the potential of these methodologies, no study has yet integrated CLSM and multimodal radiomics for acute ischemic stroke (AIS).

Methods

This retrospective study analyzed lesion, structural disconnection (SDC), and functional disconnection (FDC) maps of 148 patients with AIS and assessed their association with the National Institutes of Health Stroke Scale (NIHSS) score at admission and prognostic outcomes, measured by the modified Rankin Scale at six months. Additionally, an innovative approach was proposed in which the SDC map was used as a mask, and radiomic features were extracted and selected from T1-weighted imaging, diffusion-weighted imaging, apparent diffusion coefficient, susceptibility-weighted imaging, and fluid-attenuated inversion recovery images. Five machine learning classifiers were then used to predict the prognosis of AIS.
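The abstract does not name the feature-extraction toolkit; the snippet below is a minimal, hedged sketch of mask-based radiomic feature extraction with the open-source pyradiomics package, where the file names and the binarized SDC-map mask are hypothetical placeholders.

```python
# Hedged sketch (not the authors' code): extracting radiomic features from each
# modality within a binarized SDC-map mask using the open-source pyradiomics
# package. File names are hypothetical placeholders.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()  # first-order, shape and texture feature classes

# One image per modality; the same SDC-derived mask is reused for each of them.
modalities = ["t1.nii.gz", "dwi.nii.gz", "adc.nii.gz", "swi.nii.gz", "flair.nii.gz"]
sdc_mask = "sdc_map_mask.nii.gz"

features = {}
for image_path in modalities:
    result = extractor.execute(image_path, sdc_mask)  # OrderedDict of feature values
    features[image_path] = {k: v for k, v in result.items() if k.startswith("original_")}
```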

Results

This study constructed lesion, SDC and FDC maps to correlate with NIHSS scores and prognostic outcomes, thereby revealing the neuroanatomical mechanisms underlying neural damage and prognosis. Poor prognosis was associated with distal cortical dysfunction and fiber disconnection. Fifteen radiomic features within SDC maps from multimodal imaging were selected as inputs for machine learning models. Among the five classifiers tested, Categorical Boosting achieved the highest performance (AUC = 0.930, accuracy = 0.836).
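As a point of reference for the reported numbers, the following is a hedged sketch (not the authors' pipeline) of training a Categorical Boosting classifier on a 148 × 15 feature matrix and computing AUC and accuracy; the synthetic data and hyperparameters are illustrative only.

```python
# Hedged sketch (synthetic data, illustrative hyperparameters): training a
# Categorical Boosting classifier on a 148 x 15 radiomic feature matrix and
# reporting AUC and accuracy, the two metrics quoted above.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(148, 15))        # placeholder for the 15 selected radiomic features
y = rng.integers(0, 2, size=148)      # placeholder prognosis labels (e.g., mRS-based)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = CatBoostClassifier(iterations=300, depth=4, verbose=False, random_seed=0)
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, prob))
print("Accuracy:", accuracy_score(y_te, (prob >= 0.5).astype(int)))
```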

Conclusion

A novel model integrating CLSM and multimodal radiomics was proposed to predict long-term prognosis in AIS, which would be a promising tool for early prognostic evaluation and therapeutic planning. Further investigation is needed to assess its robustness in clinical application.
Citations: 0
Enhancing atrial fibrillation detection in PPG analysis with sparse labels through contrastive learning
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-27 · DOI: 10.1016/j.cmpb.2025.108698
Hong Wu , Qihan Hu , Daomiao Wang , Shiwei Zhu , Cuiwei Yang

Background

With the advancements in wearable technology, photoplethysmography (PPG) has emerged as a promising technique for detecting atrial fibrillation (AF) due to its ability to capture cardiovascular information. However, current deep learning-based methods have strict requirements on the quantity of labeled data. To overcome this limitation, we explore the performance of self-supervised contrastive learning in PPG-based AF detection.

Methods

Our method initially utilizes 1,209 h of unlabeled PPG data from the VitalDB database, conducting self-supervised pretraining using two contrastive learning frameworks, SimCLR and BYOL. Subsequently, the weights of the encoder are transferred and fine-tuned on small amounts of labeled PPG data from the selected MIMIC III, UMass, and DeepBeat datasets to complete the AF detection task. In the realm of contrastive learning, we investigated seven data augmentation operations to explore their composite and preferred combinations, as well as the effects of double-sided and single-sided transformations.
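Implementation details are not given in the abstract; below is a minimal, hedged sketch of SimCLR-style pretraining on 1-D PPG segments with a single-sided, Drift-like augmentation and an NT-Xent loss. The toy encoder, drift formula, segment length, and temperature are assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of SimCLR-style pretraining on 1-D PPG segments with a
# single-sided, Drift-like augmentation. The toy encoder, drift formula,
# segment length and temperature are assumptions, not the paper's setup.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def drift(x, max_drift=0.2):
    """Add a smooth random baseline drift to each segment in a (batch, length) tensor."""
    b, n = x.shape
    t = torch.linspace(0, 1, n)
    amp = (torch.rand(b, 1) * 2 - 1) * max_drift
    phase = torch.rand(b, 1) * 2 * math.pi
    return x + amp * torch.sin(2 * math.pi * t + phase)

encoder = nn.Sequential(              # toy 1-D CNN encoder with a projection head
    nn.Conv1d(1, 16, 7, stride=2, padding=3), nn.ReLU(),
    nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 64),
)

def nt_xent(z1, z2, tau=0.1):
    """Normalized temperature-scaled cross-entropy (SimCLR) loss."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))                       # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

x = torch.randn(8, 1000)               # mini-batch of unlabeled PPG segments
v1, v2 = x, drift(x)                   # single-sided transformation: only one view is augmented
z1, z2 = encoder(v1.unsqueeze(1)), encoder(v2.unsqueeze(1))
loss = nt_xent(z1, z2)
loss.backward()
```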

Results

Our research ultimately demonstrated that the preferred combination, incorporating single-sided transformation with the Drift operation, is most suitable for PPG data. Notably, even with only 1%, 20%, and 1% of the training data from the three datasets used for fine-tuning, our approach achieves better F1 scores than supervised learning on the respective complete training sets. Additionally, when fine-tuning on only 0.01% of the DeepBeat training set, our approach still showed a clear advantage over supervised learning.

Conclusion

Appropriate self-supervised contrastive pretraining effectively leverages a substantial amount of existing unlabeled PPG data, thus reducing the reliance on labeled data for AF detection, and offering a possible solution to address the limitations posed by the scarcity of labels.
Citations: 0
AutoDPS: An unsupervised diffusion model based method for multiple degradation removal in MRI
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-27 · DOI: 10.1016/j.cmpb.2025.108684
Arunima Sarkar , Ayantika Das , Keerthi Ram , Sriprabha Ramanarayanan , Suresh Emmanuel Joel , Mohanasankar Sivaprakasam

Background and Objective:

Diffusion models have demonstrated their ability in image generation and in solving inverse problems such as restoration. Unlike most existing deep-learning-based image restoration techniques, which rely on unpaired or paired data for degradation awareness, diffusion models offer an unsupervised, degradation-independent alternative. This is well-suited to restoring artifact-corrupted Magnetic Resonance Images (MRI), where it is impractical to model the degradations exactly a priori. In MRI, multiple corruptions arise, for instance, from patient movement compounded by undersampling artifacts from the acquisition settings.

Methods:

To tackle this scenario, we propose AutoDPS, an unsupervised method for corruption removal in brain MRI based on Diffusion Posterior Sampling. Our method (i) performs motion-related corruption parameter estimation using a blind iterative solver, and (ii) utilizes the knowledge of the undersampling pattern when the corruption consists of both motion and undersampling artifacts. We incorporate this corruption operation during sampling to guide the generation in recovering high-quality images.
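For intuition only, the following sketch shows a generic diffusion-posterior-sampling guidance step, in which the reverse-diffusion update is corrected by the gradient of a measurement-consistency term through an assumed corruption operator A; the dummy denoiser, noise schedule, and pooling operator are placeholders and not the AutoDPS implementation.

```python
# Hedged sketch of a generic diffusion-posterior-sampling guidance step: the
# reverse-diffusion update is corrected by the gradient of a measurement-consistency
# term through an assumed corruption operator A. The dummy denoiser, schedule and
# pooling operator below are placeholders, not the AutoDPS implementation.
import torch
import torch.nn.functional as F

def dps_step(x_t, t, denoiser, A, y, alpha_bar, step_size=1.0):
    x_t = x_t.detach().requires_grad_(True)
    eps = denoiser(x_t, t)                                           # predicted noise
    x0_hat = (x_t - torch.sqrt(1 - alpha_bar[t]) * eps) / torch.sqrt(alpha_bar[t])
    fidelity = torch.sum((y - A(x0_hat)) ** 2)                       # measurement consistency
    grad = torch.autograd.grad(fidelity, x_t)[0]
    x_prev = x0_hat + torch.sqrt(1 - alpha_bar[t]) * torch.randn_like(x_t)  # crude ancestral step
    return x_prev - step_size * grad                                 # posterior-guided update

# Toy usage: a zero denoiser and a down-sampling operator standing in for the corruption.
denoiser = lambda x, t: torch.zeros_like(x)
A = lambda x: F.avg_pool2d(x, 4)
alpha_bar = torch.linspace(0.99, 0.01, 1000)
x_t = torch.randn(1, 1, 64, 64)
y = A(torch.rand(1, 1, 64, 64))
x_next = dps_step(x_t, 500, denoiser, A, y, alpha_bar)
```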

Results:

Despite being trained to denoise and tested on completely unseen corruptions, our method AutoDPS has shown approximately 1.63 dB of improvement in PSNR over baselines for realistic 3D motion restoration and approximately 0.5 dB of improvement for random motion with undersampling. Additionally, our experiments demonstrate AutoDPS’s resilience to noise and its generalization capability under domain shift, showcasing its robustness and adaptability.
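For context, PSNR, the metric behind the reported dB improvements, can be computed with a small helper such as the one below; the data range argument is an assumption (images scaled to [0, 1]).

```python
# Minimal PSNR helper (illustrative); data_range assumes intensities in [0, 1].
import numpy as np

def psnr(reference, reconstruction, data_range=1.0):
    mse = np.mean((reference - reconstruction) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```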

Conclusion:

In this paper, we propose an unsupervised method that removes multiple corruptions, mainly motion with undersampling, in MRI images which are essential for accurate diagnosis. The experiments show promising results on realistic and composite artifacts with higher improvement margins as compared to other methods. Our code is available at https://github.com/arunima101/AutoDPS/tree/master
Citations: 0
Pathology report generation from whole slide images with knowledge retrieval and multi-level regional feature selection
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-27 · DOI: 10.1016/j.cmpb.2025.108677
Dingyi Hu , Zhiguo Jiang , Jun Shi , Fengying Xie , Kun Wu , Kunming Tang , Ming Cao , Jianguo Huai , Yushan Zheng

Background and objectives:

With the development of deep learning techniques, computer-assisted pathology diagnosis plays a crucial role in clinical diagnosis. An important task within this field is report generation, which provides doctors with text descriptions of whole slide images (WSIs). Report generation from WSIs presents significant challenges due to the structural complexity and pathological diversity of tissues, as well as the large size and high information density of WSIs. The objective of this study is to design a histopathology report generation method that can efficiently generate reports from WSIs and is suitable for clinical practice.

Methods:

In this paper, we propose a novel approach for generating pathology reports from WSIs, leveraging knowledge retrieval and multi-level regional feature selection. To deal with the uneven distribution of pathological information in WSIs, we introduce a multi-level regional feature encoding network and a feature selection module that extracts multi-level region representations and filters out region features irrelevant to the diagnosis, enabling more efficient report generation. Moreover, we design a knowledge retrieval module that leverages diagnostic information from historical cases to improve report generation performance. Additionally, we propose an out-of-domain application mode based on a large language model (LLM). The use of the LLM enhances the scalability of the generation model and improves its adaptability to data from different sources.
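As a hedged illustration of the knowledge-retrieval idea (not the paper's module), the sketch below embeds a query WSI representation, ranks historical cases by cosine similarity, and returns the top-k report texts as extra decoding context; the embeddings and reports are synthetic placeholders.

```python
# Hedged sketch of the knowledge-retrieval idea: embed a query WSI representation,
# rank historical cases by cosine similarity, and pass the top-k report texts to
# the decoder as extra context. Data are synthetic placeholders.
import torch
import torch.nn.functional as F

def retrieve_reports(query_emb, case_embs, case_reports, k=3):
    sims = F.cosine_similarity(query_emb.unsqueeze(0), case_embs)   # (num_cases,)
    top = torch.topk(sims, k).indices
    return [case_reports[i] for i in top.tolist()]

case_embs = torch.randn(100, 256)                                   # historical WSI-level embeddings
case_reports = [f"report text of case {i}" for i in range(100)]
query_emb = torch.randn(256)
context_reports = retrieve_reports(query_emb, case_embs, case_reports)
```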

Results:

The proposed method was evaluated on one public dataset and one in-house dataset. On the public GastricADC dataset (991 WSIs), our method outperformed state-of-the-art text generation methods, achieving 0.568 and 0.345 on the Rouge-L and Bleu-4 metrics, respectively. On the in-house Gastric-3300 dataset (3309 WSIs), our method achieved significantly better performance, with a Rouge-L of 0.690, surpassing the second-best state-of-the-art method, Wcap, by 6.3%.

Conclusions:

We present an advanced method for pathology report generation from WSIs, addressing the key challenges associated with the large size and complex pathological structures of these images. In particular, the multi-level regional feature selection module effectively captures diagnostically significant regions of varying sizes. The knowledge retrieval-based decoder leverages historical diagnostic data to enhance report accuracy. Our method not only improves the informativeness and relevance of the generated pathology reports but also outperforms the state-of-the-art techniques.
Citations: 0
Cooperative GAN: Automated tympanic membrane anomaly detection using a Cooperative Observation Network
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-23 · DOI: 10.1016/j.cmpb.2025.108651
Dahye Song , Younghan Chung , Jaeyoung Kim , June Choi , Yeonjoon Lee

Background and Objectives:

Recently, artificial intelligence (AI) has been applied to otolaryngology. However, existing supervised learning methods cannot easily predict data outside the learning domain. Moreover, collecting diverse medical data has become demanding owing to privacy concerns. Consequently, these limitations hinder the applications of AI in clinical settings. To address these issues, this study proposes a Cooperative Observation Network (CON), using an unsupervised anomaly detection approach. Anomaly detection is the process of identifying data patterns that deviate from the majority.

Methods:

In conventional anomaly detection, a model is trained solely on normal data and computes an abnormality score from the reconstruction error during the decoding of test data; the calculated score is then used to detect anomalies in a second step. Unlike traditional anomaly detection, the CON method does not rely on a decoding process. Instead, it detects anomalies in a single step using the discriminator of the Generative Adversarial Network. During the training process, the discriminator differentiates between the normal data distribution and artificially generated instances. However, these instances are obtained from a random distribution that does not overlap with the distribution of normal data. Consequently, the trained discriminator can recognize distributions outside the scope of normal data. Additionally, we expand the diagnostic scope by utilizing two clinical variables: tympanic membrane endoscopic images and pure tone audiometry (PTA).
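A minimal sketch of the single-step, discriminator-based scoring idea is given below, assuming a toy convolutional discriminator and using 1 − D(x) as the abnormality score with an illustrative threshold; this is not the CON architecture itself.

```python
# Hedged sketch of single-step, discriminator-based scoring with a toy network:
# a discriminator trained to output ~1 for normal data yields 1 - D(x) as an
# abnormality score. The architecture and threshold are placeholders, not CON.
import torch
import torch.nn as nn

discriminator = nn.Sequential(
    nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

def anomaly_score(x):
    with torch.no_grad():
        return 1.0 - discriminator(x).squeeze(1)    # higher score = more likely abnormal

images = torch.rand(4, 3, 224, 224)                 # e.g., tympanic membrane endoscopic images
is_abnormal = anomaly_score(images) > 0.5           # illustrative decision threshold
```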

Results:

CON detects anomalies with a high accuracy of 96.75%. This includes cases with a normal tympanic membrane but with hearing loss, perforation, cholesteatoma, or retraction; cases with two co-existing diseases; and cases that require treatment but are difficult to diagnose with specific diseases. CON significantly reduces the computational load by approximately ten times compared with existing models while maintaining high accuracy and broadening diagnostic scope.

Conclusion:

This study successfully addresses the inherent limitations of supervised learning and anomaly detection, thereby enhancing the potential of AI-based disease detection in otolaryngology for practical clinical applications. The proposed methods can be seamlessly incorporated into medical machines for real-world clinical use owing to their low computational load. Moreover, CON requires only a small amount of training data while maintaining the ability to diagnose a broad range of diseases with high accuracy. Therefore, it can effectively aid medical professionals in diagnosis in clinical scenarios, thereby increasing the efficiency of healthcare delivery.
Citations: 0
Bridging gaps in artificial intelligence adoption for maternal-fetal and obstetric care: Unveiling transformative capabilities and challenges
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-23 · DOI: 10.1016/j.cmpb.2025.108682
Kalyan Tadepalli , Abhijit Das , Tanushree Meena , Sudipta Roy

Purpose

This review aims to comprehensively explore the application of Artificial Intelligence (AI) to an area that has not traditionally been explored in depth: the continuum of maternal-fetal health. In doing so, the intent was to examine this physiologically continuous spectrum of mother and child health, to highlight potential pitfalls, and to suggest solutions for the same.

Method

A systematic search identified studies employing AI techniques for prediction, diagnosis, and decision support using various modalities such as imaging, electrophysiological signals, and electronic health records in the domain of obstetrics and fetal health. In the selected articles, AI applications in fetal morphology, gestational age assessment, congenital defect detection, fetal monitoring, placental analysis, and maternal physiological monitoring were then critically examined, both from the perspective of the domain and from that of artificial intelligence.

Result

AI-driven solutions demonstrate promising capabilities in medical diagnostics and risk prediction, offering automation, improved accuracy, and the potential for personalized medicine. However, challenges regarding data availability, algorithmic transparency, and ethical considerations must be overcome to ensure responsible and effective clinical implementation. These challenges must be urgently addressed so that a domain as critical to public health as obstetrics and fetal health can fully benefit from the gigantic strides made in the field of artificial intelligence.

Conclusion

Open access to relevant datasets is crucial for equitable progress in this critical public health domain. Integrating responsible and explainable AI, while addressing ethical considerations, is essential to maximize the public health benefits of AI-driven solutions in maternal-fetal care.
Citations: 0
Low dose computed tomography reconstruction with momentum-based frequency adjustment network
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-22 · DOI: 10.1016/j.cmpb.2025.108673
Qixiang Sun , Ning He , Ping Yang , Xing Zhao

Background and Objective:

Recent investigations into Low-Dose Computed Tomography (LDCT) reconstruction methods have brought Model-Based Data-Driven (MBDD) approaches to the forefront. One prominent architecture within MBDD entails the integration of Model-Based Iterative Reconstruction (MBIR) with Deep Learning (DL). While this approach offers the advantage of harnessing information from sinogram and image domains, it also reveals several deficiencies. First and foremost, the efficacy of DL methods within the realm of MBDD necessitates meticulous enhancement, as it directly impacts the computational cost and the quality of reconstructed images. Next, high computational costs and a high number of iterations limit the development of MBDD methods. Last but not least, CT reconstruction is sensitive to pixel accuracy, and the role of loss functions within DL methods is crucial for meeting this requirement.

Methods:

This paper advances MBDD methods through three principal contributions. Firstly, we introduce an innovative Frequency Adjustment Network (FAN) that effectively adjusts both high and low-frequency components during the inference phase, resulting in substantial enhancements in reconstruction performance. Second, we develop the Momentum-based Frequency Adjustment Network (MFAN), which leverages momentum terms as an extrapolation strategy to facilitate the amplification of changes throughout successive iterations, culminating in a rapid convergence framework. Lastly, we delve into the visual properties of CT images and present a unique loss function named Focal Detail Loss (FDL). The FDL function preserves fine details throughout the training phase, significantly improving reconstruction quality.
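To make the two named ingredients more concrete, the sketch below combines an FFT-based split of an image estimate into low- and high-frequency parts (with separate gains standing in for the learned adjustment network) and a momentum-style extrapolation across iterations; the cutoff, gains, and momentum coefficient are assumptions, and this is not the FAN/MFAN implementation.

```python
# Hedged sketch: an FFT-based split that scales low- and high-frequency components
# separately (identity-like gains stand in for the learned adjustment network),
# plus a momentum (extrapolation) term across iterations. Cutoff, gains and the
# momentum coefficient are assumptions.
import torch

def frequency_adjust(x, low_gain=1.0, high_gain=1.0, cutoff=0.1):
    f = torch.fft.fftshift(torch.fft.fft2(x))
    h, w = x.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, h),
                            torch.linspace(-0.5, 0.5, w), indexing="ij")
    low = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).float()
    f = f * (low * low_gain + (1.0 - low) * high_gain)
    return torch.fft.ifft2(torch.fft.ifftshift(f)).real

def momentum_iteration(x_curr, x_prev, update_fn, beta=0.5):
    x_extrap = x_curr + beta * (x_curr - x_prev)     # momentum extrapolation across iterations
    return update_fn(frequency_adjust(x_extrap))

x_prev = torch.zeros(1, 1, 64, 64)
x_curr = torch.rand(1, 1, 64, 64)
x_next = momentum_iteration(x_curr, x_prev, update_fn=lambda x: x)
```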

Results:

Through a series of experimental validations on the AAPM-Mayo public dataset and real-world piglet datasets, the three contributions described above demonstrated superior performance. As an iterative method, MFAN achieved convergence in 10 iterations, faster than other methods. Ablation studies further highlight the advanced performance of each contribution.

Conclusions:

This paper presents an MBDD-based LDCT reconstruction method using a momentum-based frequency adjustment network with a focal detail loss function. This approach significantly reduces the number of iterations required for convergence while achieving superior reconstruction results in visual and numerical analyses.
Citations: 0
Impact of aortic branch retention strategies on thrombus growth prediction in type B aortic dissection: A hemodynamic study
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-22 · DOI: 10.1016/j.cmpb.2025.108679
Jun Wen , Qingyuan Huang , Xiaoqin Chen , Kaiyue Zhang , Liqing Peng

Background

Type B Aortic Dissection (TBAD) is a serious cardiovascular condition treated effectively by TEVAR (Thoracic Endovascular Aortic Repair), which promotes false lumen thrombosis with minimal invasiveness. However, the impact of aortic branch retention strategies on thrombus growth prediction is often underestimated.

Method

This study numerically investigated four branch retention strategies: preserving all branches (Type 1 strategy), removing all branches (Type 2 strategy), removing only the aortic arch branches (Type 3 strategy), and removing only the abdominal aortic branches (Type 4 strategy).

Results

Type 4 strategy demonstrates similar hemodynamic stability, shear stress distribution, and thrombus formation risk as Type 1, while simplifying the anatomical structure. In contrast, complete branch removal (Type 2) and retention of only the aortic arch branches (Type 3) lead to significant flow disturbances and hemodynamic instability, potentially increasing the risk of false lumen expansion and thrombus misjudgment. Additionally, Type 4 strategy shows potential value in image simplification and deep learning applications by reducing the workload of image segmentation and 3D reconstruction while improving model training efficiency and accuracy.

Conclusion

This study recommends prioritizing the Type 4 strategy in aortic image simplification and TEVAR surgical planning to maintain hemodynamic stability while reducing computational complexity. This approach has significant implications for both personalized treatment and deep learning-based analyses.
Citations: 0
Segmentation of skin layers on HFUS images using the attention mechanism
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-21 · DOI: 10.1016/j.cmpb.2025.108668
Anna Slian , Katarzyna Korecka , Adriana Polańska , Joanna Czajkowska

Background and Objective:

The fast development of imaging techniques in recent years has also opened new diagnostic paths in dermatology, where high-frequency ultrasound (HFUS) enables the visualization of superficial structures. At the same time, automated ultrasound image analysis algorithms have started to be widely described in the literature. Although the newest deep learning models can classify images without a preceding segmentation step, segmentation is often the first part of a computer-aided diagnosis framework that supports further measurements. For clinical evaluation, the parameters of the skin layers (entry echo, SLEB, and dermis) are the most important for differential diagnosis and accurate evaluation of the treatment process.

Methods:

The paper presents a novel neural network model combining contextual feature pyramid blocks with attention gates to segment skin layers accurately. In addition, a sequential model was tested that pre-segmented the entry echo layer as the most characteristic element in a skin ultrasound image. For the first time, we segmented three skin layers: the entry echo layer, SLEB, and dermis. The developed method is verified using two different HFUS image databases containing images acquired with different ultrasound machines and ultrasound probe frequencies. Measures of model performance were proposed: one assessing the percentage of cases in which the model classified the whole image as background, and two focusing on the SLEB layer, namely the percentages of false positive and false negative detections.
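For readers unfamiliar with attention gates, the following is a hedged sketch of an additive attention gate in the style of Attention U-Net, in which the decoder (gating) signal re-weights the encoder skip features; channel sizes are illustrative, and the gate and skip maps are assumed to share the same spatial resolution, a simplification relative to typical implementations.

```python
# Hedged sketch of an additive attention gate (Attention U-Net style): the decoder
# (gating) signal re-weights the encoder skip features before they are passed on.
# Channel sizes are illustrative; gate and skip maps share spatial size here.
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, gate_ch, skip_ch, inter_ch):
        super().__init__()
        self.w_g = nn.Conv2d(gate_ch, inter_ch, 1)
        self.w_x = nn.Conv2d(skip_ch, inter_ch, 1)
        self.psi = nn.Conv2d(inter_ch, 1, 1)
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, gate, skip):
        attn = self.sigmoid(self.psi(self.relu(self.w_g(gate) + self.w_x(skip))))
        return skip * attn                           # re-weighted skip connection

gate = torch.rand(1, 64, 32, 32)                     # decoder feature map
skip = torch.rand(1, 32, 32, 32)                     # encoder skip feature map
out = AttentionGate(64, 32, 16)(gate, skip)
```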

Results:

The average Dice indexes obtained on the dataset recorded for this study were 0.95, 0.85, and 0.93 for the entry echo, SLEB, and dermis, respectively. Among models trained without transfer learning, the proposed architectures were the only ones that detected the skin correctly every time. Both models achieved the lowest false positive (0.35% and 0%) and false negative (4.48% and 3.66%) rates during the experiments.

Conclusion:

Contextual feature pyramid modules and attention gates allow more accurate detection and segmentation of skin layers. The results obtained are compared with other models described in the literature as efficient for HFUS image analysis, and low false positive and false negative rates speak in favor of our approach.
Citations: 0
Modelling ventilation with spontaneous breaths: Improving accuracy with shape functions and slice method
IF 4.9 · CAS Zone 2 (Medicine) · Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS · Pub Date: 2025-02-21 · DOI: 10.1016/j.cmpb.2025.108685
Ivan Ruiz , Guillermo Jaramillo , José I. García , Andres Valencia , Alejandro Segura , Andrés Fabricio Caballero-Lozada

Background and objective

Accurate detection of spontaneous breathings (SBs) and respiratory asynchronies during mechanical ventilation (MV) is essential for optimizing patient care and preventing lung injuries. Conventional models often fail to capture these events with sufficient accuracy. To address this gap, this study introduces new equations incorporating custom shape functions and the Slice method, aiming to deliver a more robust, “bedside” model with potential applications in real-time asynchrony detection.

Methods

Three new equations were developed to incorporate shape functions accounting for pressure- and volume-dependent changes in elastance, and a fourth model combined these shape functions with the Slice method. Retrospective data from 8 ICU patients (each providing 6 mins of ventilatory data) were split into two datasets of 4 patients each: one for model development and refinement, and the other for testing performance in reproducing ventilatory waveforms. Model accuracy was assessed using the coefficient of determination (R²) and Mean Residual Error (MRE). This evaluation focused on how effectively each model captured actual patient breathing mechanics, particularly in the presence of SBs or respiratory asynchronies.
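The paper's equations are not reproduced in the abstract; purely as an illustration of the modelling setup and the two accuracy metrics, the sketch below uses a generic single-compartment equation of motion with a simple volume-dependent elastance shape function, and treats MRE as a mean absolute residual, which is an assumption about its exact definition.

```python
# Illustration only (not the paper's equations): a generic single-compartment
# equation of motion with a simple volume-dependent elastance "shape function",
# plus the two reported accuracy metrics. MRE is assumed to be a mean absolute residual.
import numpy as np

def airway_pressure(volume, flow, e0, e1, resistance, p0):
    elastance = e0 + e1 * volume                   # illustrative volume-dependent elastance
    return elastance * volume + resistance * flow + p0

def r_squared(measured, modeled):
    ss_res = np.sum((measured - modeled) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1 - ss_res / ss_tot

def mean_residual_error(measured, modeled):
    return np.mean(np.abs(measured - modeled))

t = np.linspace(0, 3, 300)                         # one synthetic 3-second breath
volume = 0.5 * np.sin(np.pi * t / 3) ** 2
flow = np.gradient(volume, t)
measured = airway_pressure(volume, flow, 20, 5, 8, 5) + np.random.normal(0, 0.2, t.size)
modeled = airway_pressure(volume, flow, 20, 5, 8, 5)
print(r_squared(measured, modeled), mean_residual_error(measured, modeled))
```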

Results

The proposed models, especially the one combining shape functions with the Slice method, Recruitment Distention Elastance Analysis + Slice (RDEA + Slice), exhibited a strong correlation with patient data, evidenced by high R² values. While conventional models achieved R² coefficients between 0.25 and 0.87, the new models improved these to 0.90–0.97. The RDEA + Slice model attained significantly lower MRE values (0.012–0.032), underscoring its superior accuracy in capturing dynamic changes. Furthermore, a unique identifiability analysis confirmed that the model parameters can be reliably estimated, supporting its potential for clinical application.

Conclusions

The new bedside models, especially RDEA + Slice, demonstrate promise in enhancing mechanical ventilation management. By accurately capturing ventilatory mechanics in the presence of SBs, they hold potential to refine ventilator settings, reduce lung injury risks, and integrate with real-time diagnostic tools for detecting patient-ventilator asynchronies, ultimately supporting more personalized and effective ICU care.
Citations: 0