Evaluating and enhancing the robustness of vision transformers against adversarial attacks in medical imaging
Pub Date : 2024-10-25  DOI: 10.1007/s11517-024-03226-5
Elif Kanca, Selen Ayas, Elif Baykal Kablan, Murat Ekinci
Deep neural networks (DNNs) have demonstrated exceptional performance in medical image analysis. However, recent studies have uncovered significant vulnerabilities in DNN models, particularly their susceptibility to adversarial attacks that manipulate these models into making inaccurate predictions. Vision Transformers (ViTs), despite their advanced capabilities in medical imaging tasks, have not been thoroughly evaluated for their robustness against such attacks in this domain. This study addresses this research gap by conducting an extensive analysis of various adversarial attacks on ViTs specifically within medical imaging contexts. We explore adversarial training as a potential defense mechanism and assess the resilience of ViT models against state-of-the-art adversarial attacks and defense strategies using publicly available benchmark medical image datasets. Our findings reveal that ViTs are vulnerable to adversarial attacks even with minimal perturbations, although adversarial training significantly enhances their robustness, achieving over 80% classification accuracy. Additionally, we perform a comparative analysis with state-of-the-art convolutional neural network models, highlighting the unique strengths and weaknesses of ViTs in handling adversarial threats. This research advances the understanding of ViT robustness in medical imaging and provides insights into their practical deployment in real-world scenarios.
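The abstract does not specify the attack or training settings. As a rough illustration of the kind of adversarial-training loop it describes, here is a minimal FGSM-based sketch in PyTorch; the epsilon value, the 50/50 clean/adversarial mix, and the assumption that inputs lie in [0, 1] are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=2 / 255):
    """Craft FGSM adversarial examples: x_adv = x + eps * sign(grad_x loss)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    # Assumes images are normalized to [0, 1].
    return (images + epsilon * grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, epsilon=2 / 255, device="cuda"):
    """One epoch of mixed clean/adversarial training (a common recipe, not the paper's exact one)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = fgsm_perturb(model, images, labels, epsilon)
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(images), labels) \
             + 0.5 * F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```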
Deep learning-based segmentation of abdominal aortic aneurysms and intraluminal thrombus in 3D ultrasound images
Pub Date : 2024-10-25  DOI: 10.1007/s11517-024-03216-7
Arjet Nievergeld, Bünyamin Çetinkaya, Esther Maas, Marc van Sambeek, Richard Lopata, Navchetan Awasthi
Ultrasound (US)-based patient-specific rupture risk analysis of abdominal aortic aneurysms (AAAs) has shown promising results. Input for these models is the patient-specific geometry of the AAA. However, segmentation of the intraluminal thrombus (ILT) remains challenging in US images due to the low ILT-blood contrast. This study aims to improve AAA and ILT segmentation in time-resolved three-dimensional (3D + t) US images using a deep learning approach. In this study, a "no new net" (nnU-Net) model was trained on 3D + t US data using either US-based or (co-registered) computed tomography (CT)-based annotations. The optimal training strategy for this low-contrast data was determined for a limited dataset. The merit of augmentation was investigated, as well as the inclusion of low-contrast areas. Segmentation results were validated with CT-based geometries as the ground truth. The model trained on CT-based masks showed the best performance in terms of DICE index, Hausdorff distance, and diameter differences, covering a larger part of the AAA. With higher accuracy and less manual input, the model outperforms conventional methods, with a mean Hausdorff distance of 4.4 mm for the vessel and 7.8 mm for the lumen. However, visibility of the lumen-ILT interface remains the limiting factor, necessitating improvements in image acquisition to ensure broader patient inclusion and enable rupture risk assessment of AAAs in the future.
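For reference, the two reported validation metrics can be computed for binary 3D masks as sketched below. This is a generic sketch, not the authors' evaluation code; the Hausdorff distance is taken over all foreground voxels (a simplification of surface-based variants), and the voxel spacing is an assumed parameter.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred, gt):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks (assumes at least one is non-empty)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff_mm(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (in mm) between the foreground voxel sets of two 3D masks."""
    p = np.argwhere(pred) * np.asarray(spacing)  # voxel indices scaled to physical units
    g = np.argwhere(gt) * np.asarray(spacing)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```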
VCU-Net: a vascular convolutional network with feature splicing for cerebrovascular image segmentation
Pub Date : 2024-10-25  DOI: 10.1007/s11517-024-03219-4
Mengxin Li, Fan Lv, Jiaming Chen, Kunyan Zheng, Jingwen Zhao
Cerebrovascular image segmentation is one of the crucial tasks in biomedical image processing. Because cerebral blood vessels vary widely in morphology, traditional convolutional kernels are poor at perceiving the structure of elongated vessels in the brain, and their feature information is easily lost during network training. In this paper, a vascular convolutional U-network (VCU-Net) is proposed to address these problems. The network replaces the traditional convolution kernel with a new vascular convolution that adaptively extracts features of elongated cerebral vessels with different morphologies and orientations. In the encoding stage, a new feature splicing method combines the feature tensor obtained through vascular convolution with the original tensor to provide richer feature information. Experiments show that the DSC and IOU of the proposed method are 53.57% and 69.74%, improvements of 2.11% and 2.01% over GVC-Net, the best-performing of several typical models. In visual comparisons, the proposed network segments complex cerebrovascular structures better, and in particular handles elongated cerebral vessels with better integrity and continuity.
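The "feature splicing" step is described only at a high level. One plausible reading, sketched below, is channel-wise concatenation of the vascular-convolution output with the original feature tensor followed by a 1×1 convolution to fuse them; the module name, channel counts, and the plain 3×3 stand-in for the paper's vascular convolution are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class FeatureSplice(nn.Module):
    """Concatenate vascular-convolution features with the original tensor, then fuse them."""
    def __init__(self, channels):
        super().__init__()
        # Placeholder for the paper's vascular convolution; a plain 3x3 conv here.
        self.vascular_conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        v = self.vascular_conv(x)            # elongated-vessel features (stand-in)
        spliced = torch.cat([x, v], dim=1)   # "splice" along the channel dimension
        return self.fuse(spliced)

# Example: fuse a 64-channel feature map with its vascular-convolution counterpart.
out = FeatureSplice(64)(torch.randn(1, 64, 128, 128))
```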
{"title":"VCU-Net: a vascular convolutional network with feature splicing for cerebrovascular image segmentation.","authors":"Mengxin Li, Fan Lv, Jiaming Chen, Kunyan Zheng, Jingwen Zhao","doi":"10.1007/s11517-024-03219-4","DOIUrl":"https://doi.org/10.1007/s11517-024-03219-4","url":null,"abstract":"<p><p>Cerebrovascular image segmentation is one of the crucial tasks in the field of biomedical image processing. Due to the variable morphology of cerebral blood vessels, the traditional convolutional kernel is weak in perceiving the structure of elongated blood vessels in the brain, and it is easy to lose the feature information of the elongated blood vessels during the network training process. In this paper, a vascular convolutional U-network (VCU-Net) is proposed to address these problems. This network utilizes a new convolution (vascular convolution) instead of the traditional convolution kernel, to extract features of elongated blood vessels in the brain with different morphologies and orientations by adaptive convolution. In the network encoding stage, a new feature splicing method is used to combine the feature tensor obtained through vascular convolution with the original tensor to provide richer feature information. Experiments show that the DSC and IOU of the proposed method are 53.57% and 69.74%, which are improved by 2.11% and 2.01% over the best performance of the GVC-Net among several typical models. In image visualization, the proposed network has better segmentation performance for complex cerebrovascular structures, especially in dealing with elongated blood vessels in the brain, which shows better integrity and continuity.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142511898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comprehensive validation of a compact laser speckle contrast imaging system for vascular function assessment: from the laboratory to the clinic
Pub Date : 2024-10-24  DOI: 10.1007/s11517-024-03211-y
Meng-Che Hsieh, Chia-Yu Chang, Ching-Han Hsu, Congo Tak Shing Ching, Lun-De Liao
Proper organ functioning relies on adequate blood circulation; thus, monitoring blood flow is crucial for early disease diagnosis. Laser speckle contrast imaging (LSCI) is a noninvasive technique that is widely used for measuring superficial blood flow. In this study, we developed a portable LSCI system using an 805-nm near-infrared laser and a monochrome CMOS camera with a 10 × macro zoom lens. The system achieved high-resolution imaging (1280 × 1024 pixels) with a working distance of 10 to 35 cm. The relative flow velocities were visualized via a spatial speckle contrast analysis algorithm with a 5 × 5 sliding window. In vitro experiments demonstrated the system's ability to image flow velocities in a fluid model, and a linear relationship was observed between the actual flow rate and the relative flow rate obtained by the system. The correlation coefficient (R²) exceeded 0.83 for volumetric flow rates of 0 to 0.2 ml/min when channel widths were greater than 1.2 mm, and R² > 0.94 was obtained for channel widths exceeding 1.6 mm. Comparisons with laser Doppler flowmetry (LDF) revealed a strong positive correlation between the LSCI and LDF results. In vivo experiments captured postocclusive reactive hyperemic responses in rat hind limbs and human palms and feet. The main research contribution is the development of this compact and portable LSCI device, as well as the validation of its reliability and convenience in various scenarios and environments. Future applications of this technology include evaluating blood flow changes during skin injuries, such as abrasions, burns, and diabetic foot ulcers, to aid medical institutions in treatment optimization and to reduce treatment duration.
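For reference, spatial speckle contrast is conventionally defined as K = σ/μ computed over a small sliding window (here 5 × 5, matching the abstract). A minimal NumPy/SciPy sketch of that calculation follows; it is a textbook formulation, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=5):
    """Spatial speckle contrast K = local_std / local_mean over a sliding window."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.clip(sq_mean - mean * mean, 0, None))
    return std / (mean + 1e-12)  # small epsilon avoids division by zero

# A relative flow index is often taken as proportional to 1 / K**2.
```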
{"title":"Comprehensive validation of a compact laser speckle contrast imaging system for vascular function assessment: from the laboratory to the clinic.","authors":"Meng-Che Hsieh, Chia-Yu Chang, Ching-Han Hsu, Congo Tak Shing Ching, Lun-De Liao","doi":"10.1007/s11517-024-03211-y","DOIUrl":"https://doi.org/10.1007/s11517-024-03211-y","url":null,"abstract":"<p><p>Proper organ functioning relies on adequate blood circulation; thus, monitoring blood flow is crucial for early disease diagnosis. Laser speckle contrast imaging (LSCI) is a noninvasive technique that is widely used for measuring superficial blood flow. In this study, we developed a portable LSCI system using an 805-nm near-infrared laser and a monochrome CMOS camera with a 10 × macro zoom lens. The system achieved a high-resolution imaging (1280 × 1024 pixels) with a working distance of 10 to 35 cm. The relative flow velocities were visualized via a spatial speckle contrast analysis algorithm with a 5 × 5 sliding window. In vitro experiments demonstrated the system's ability to image flow velocities in a fluid model, and a linear relationship was observed between the actual flow rate and the relative flow rate obtained by the system. The correlation coefficient (R<sup>2</sup>) exceeded 0.83 for volumetric flow rates of 0 to 0.2 ml/min when channel widths were greater than 1.2 mm, and R<sup>2</sup> > 0.94 was obtained for channel widths exceeding 1.6 mm. Comparisons with laser Doppler flowmetry (LDF) revealed a strong positive correlation between the LSCI and LDF results. In vivo experiments captured postocclusive reactive hyperemic responses in rat hind limbs and human palms and feet. The main research contribution is the development of this compact and portable LSCI device, as well as the validation of its reliability and convenience in various scenarios and environments. Future applications of this technology include evaluating blood flow changes during skin injuries, such as abrasions, burns, and diabetic foot ulcers, to aid medical institutions in treatment optimization and to reduce treatment duration.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142511894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MFP-YOLO: a multi-scale feature perception network for CT bone metastasis detection
Pub Date : 2024-10-22  DOI: 10.1007/s11517-024-03221-w
Wenrui Lu, Wei Zhang, Yanyan Liu, Lingyun Xu, Yimeng Fan, Zhaowei Meng, Qiang Jia
Bone metastasis is one of the most common forms of metastasis in the late stages of malignancy. The early detection of bone metastases can help clinicians develop appropriate treatment plans. CT images are essential for diagnosing and assessing bone metastases in clinical practice. However, early bone metastasis lesions occupy a small part of the image and display variable sizes as the condition progresses, which complicates detection. To improve diagnostic efficiency, this paper proposes a novel algorithm, MFP-YOLO. Building on the YOLOv5 algorithm, this approach introduces a feature extraction module capable of capturing global information and designs a new content-aware feature pyramid structure to improve the network's capability in processing lesions of varying sizes. Moreover, this paper innovatively applies a decoder with a transformer structure to bone metastasis detection. A dataset comprising 3921 CT images was created specifically for this task. The proposed method outperforms the baseline model with a 5.5% increase in precision and a 7.7% boost in recall. The experimental results indicate that this method can meet the needs of bone metastasis detection tasks in real scenarios and provide assistance for medical diagnosis.
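The reported gains are in detection precision and recall, which follow directly from matched-detection counts. A small sketch of those definitions is given below; the counts in the usage example are purely illustrative and do not come from the paper.

```python
def precision_recall(true_positives: int, false_positives: int, false_negatives: int):
    """Standard detection metrics: precision = TP/(TP+FP), recall = TP/(TP+FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Illustrative numbers only, not results from the paper:
p, r = precision_recall(true_positives=850, false_positives=90, false_negatives=150)
print(f"precision={p:.3f}, recall={r:.3f}")
```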
{"title":"MFP-YOLO: a multi-scale feature perception network for CT bone metastasis detection.","authors":"Wenrui Lu, Wei Zhang, Yanyan Liu, Lingyun Xu, Yimeng Fan, Zhaowei Meng, Qiang Jia","doi":"10.1007/s11517-024-03221-w","DOIUrl":"https://doi.org/10.1007/s11517-024-03221-w","url":null,"abstract":"<p><p>Bone metastasis is one of the most common forms of metastasis in the late stages of malignancy. The early detection of bone metastases can help clinicians develop appropriate treatment plans. CT images are essential for diagnosing and assessing bone metastases in clinical practice. However, early bone metastasis lesions occupy a small part of the image and display variable sizes as the condition progresses, which adds complexity to the detection. To improve diagnostic efficiency, this paper proposes a novel algorithm-MFP-YOLO. Building on the YOLOv5 algorithm, this approach introduces a feature extraction module capable of capturing global information and designs a new content-aware feature pyramid structure to improve the network's capability in processing lesions of varying sizes. Moreover, this paper innovatively applies a transformer-structure decoder to bone metastasis detection. A dataset comprising 3921 CT images was created specifically for this task. The proposed method outperforms the baseline model with a 5.5% increase in precision and a 7.7% boost in recall. The experimental results indicate that this method can meet the needs of bone metastasis detection tasks in real scenarios and provide assistance for medical diagnosis.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Structure preservation constraints for unsupervised domain adaptation intracranial vessel segmentation
Pub Date : 2024-10-21  DOI: 10.1007/s11517-024-03195-9
Sizhe Zhao, Qi Sun, Jinzhu Yang, Yuliang Yuan, Yan Huang, Zhiqing Li
Unsupervised domain adaptation (UDA) has received interest as a means to alleviate the burden of data annotation. Nevertheless, existing UDA segmentation methods exhibit performance degradation in fine intracranial vessel segmentation tasks due to the problem of structure mismatch in the image synthesis procedure. To improve the image synthesis quality and the segmentation performance, a novel UDA segmentation method with structure preservation approaches, named StruP-Net, is proposed. The StruP-Net employs adversarial learning for image synthesis and utilizes two domain-specific segmentation networks to enhance the semantic consistency between real images and synthesized images. Additionally, two distinct structure preservation approaches, feature-level structure preservation (F-SP) and image-level structure preservation (I-SP), are proposed to alleviate the problem of structure mismatch in the image synthesis procedure. The F-SP, composed of two domain-specific graph convolutional networks (GCN), focuses on providing feature-level constraints to enhance the structural similarity between real images and synthesized images. Meanwhile, the I-SP imposes constraints on structure similarity based on perceptual loss. The cross-modality experimental results from magnetic resonance angiography (MRA) images to computed tomography angiography (CTA) images indicate that StruP-Net achieves better segmentation performance compared with other state-of-the-art methods. Furthermore, high inference efficiency demonstrates the clinical application potential of StruP-Net. The code is available at https://github.com/Mayoiuta/StruP-Net .
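The image-level structure preservation (I-SP) term is described as a perceptual-loss constraint on structure similarity. A generic perceptual loss of the kind commonly used for this purpose (frozen VGG16 features, L1 distance) is sketched below; the chosen layer, network, and weighting are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """L1 distance between frozen VGG16 feature maps of real and synthesized images."""
    def __init__(self, layer_index=16):  # up to relu3_3 by default (an assumption)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:layer_index]
        for p in vgg.parameters():
            p.requires_grad_(False)  # keep the feature extractor fixed
        self.vgg = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, synthesized, real):
        # Inputs are expected as 3-channel images normalized for VGG.
        return self.criterion(self.vgg(synthesized), self.vgg(real))
```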
{"title":"Structure preservation constraints for unsupervised domain adaptation intracranial vessel segmentation.","authors":"Sizhe Zhao, Qi Sun, Jinzhu Yang, Yuliang Yuan, Yan Huang, Zhiqing Li","doi":"10.1007/s11517-024-03195-9","DOIUrl":"https://doi.org/10.1007/s11517-024-03195-9","url":null,"abstract":"<p><p>Unsupervised domain adaptation (UDA) has received interest as a means to alleviate the burden of data annotation. Nevertheless, existing UDA segmentation methods exhibit performance degradation in fine intracranial vessel segmentation tasks due to the problem of structure mismatch in the image synthesis procedure. To improve the image synthesis quality and the segmentation performance, a novel UDA segmentation method with structure preservation approaches, named StruP-Net, is proposed. The StruP-Net employs adversarial learning for image synthesis and utilizes two domain-specific segmentation networks to enhance the semantic consistency between real images and synthesized images. Additionally, two distinct structure preservation approaches, feature-level structure preservation (F-SP) and image-level structure preservation (I-SP), are proposed to alleviate the problem of structure mismatch in the image synthesis procedure. The F-SP, composed of two domain-specific graph convolutional networks (GCN), focuses on providing feature-level constraints to enhance the structural similarity between real images and synthesized images. Meanwhile, the I-SP imposes constraints on structure similarity based on perceptual loss. The cross-modality experimental results from magnetic resonance angiography (MRA) images to computed tomography angiography (CTA) images indicate that StruP-Net achieves better segmentation performance compared with other state-of-the-art methods. Furthermore, high inference efficiency demonstrates the clinical application potential of StruP-Net. The code is available at https://github.com/Mayoiuta/StruP-Net .</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-scale feature extraction and fusion-based model for retinal vessel segmentation in fundus images
Pub Date : 2024-10-21  DOI: 10.1007/s11517-024-03223-8
Jinzhi Zhou, Guangcen Ma, Haoyang He, Saifeng Li, Guopeng Zhang
To address the low segmentation accuracy caused by the minute size of retinal vessels, this paper proposes a retinal vessel segmentation model based on an improved U-Net that combines multi-scale feature extraction and fusion. An improved dilated residual module first replaces the original convolutional layers of U-Net; coupled with a dual attention mechanism and diverse dilation rates, it facilitates the extraction of multi-scale vascular features. Moreover, an adaptive feature fusion module is added at the skip connections to improve vessel connectivity. To further optimize network training, a hybrid loss function mitigates the class imbalance between vessels and the background. Experimental results on the DRIVE and CHASE_DB1 datasets show that the proposed model achieves accuracies of 96.27% and 96.96%, sensitivities of 81.32% and 82.59%, and AUCs of 98.34% and 98.70%, respectively, demonstrating superior segmentation performance.
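The hybrid loss used to counter the vessel/background imbalance is not detailed in the abstract. A common formulation combining binary cross-entropy with a soft Dice term is sketched below as one plausible instance; the equal weighting and smoothing constant are assumptions.

```python
import torch
import torch.nn as nn

class HybridLoss(nn.Module):
    """Weighted sum of binary cross-entropy and soft Dice loss for binary segmentation."""
    def __init__(self, bce_weight=0.5, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits, targets):
        # logits: raw network outputs; targets: float tensor of 0/1 vessel labels.
        probs = torch.sigmoid(logits)
        inter = (probs * targets).sum()
        dice = (2.0 * inter + self.smooth) / (probs.sum() + targets.sum() + self.smooth)
        return self.bce_weight * self.bce(logits, targets) + (1.0 - self.bce_weight) * (1.0 - dice)
```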
{"title":"A multi-scale feature extraction and fusion-based model for retinal vessel segmentation in fundus images.","authors":"Jinzhi Zhou, Guangcen Ma, Haoyang He, Saifeng Li, Guopeng Zhang","doi":"10.1007/s11517-024-03223-8","DOIUrl":"https://doi.org/10.1007/s11517-024-03223-8","url":null,"abstract":"<p><p>In response to the challenge of low accuracy in retinal vessel segmentation attributed to the minute nature of the vessels, this paper proposes a retinal vessel segmentation model based on an improved U-Net, which combines multi-scale feature extraction and fusion techniques. An improved dilated residual module was first used to replace the original convolutional layer of U-Net, and this module, coupled with a dual attention mechanism and diverse expansion rates, facilitates the extraction of multi-scale vascular features. Moreover, an adaptive feature fusion module was added at the skip connections of the model to improve vessel connectivity. To further optimize network training, a hybrid loss function is employed to mitigate the class imbalance between vessels and the background. Experimental results on the DRIVE dataset and CHASE_DB1 dataset show that the proposed model has an accuracy of 96.27% and 96.96%, sensitivity of 81.32% and 82.59%, and AUC of 98.34% and 98.70%, respectively, demonstrating superior segmentation performance.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a spinopelvic complex finite element model for quantitative analysis of the biomechanical response of patients with degenerative spondylolisthesis
Pub Date : 2024-10-19  DOI: 10.1007/s11517-024-03218-5
Ziyang Liang, Xiaowei Dai, Weisen Li, Weimei Chen, Qi Shi, Yizong Wei, Qianqian Liang, Yuanfang Lin
Research on degenerative spondylolisthesis (DS) has focused primarily on the biomechanical responses of pathological segments, with few studies incorporating muscle modelling into simulated analysis; as a result, physical therapy has emphasized the back muscles while neglecting the ventral muscles. The purpose of this study was to quantitatively analyse the biomechanical response of the spinopelvic complex and surrounding muscle groups in DS patients using integrative modelling. The findings may aid in the development of more comprehensive rehabilitation strategies for DS patients. Two new finite element models of the spinopelvic complex with detailed muscles, one of a normal spine and one of a DS spine (L4 forward slippage), were established and validated at multiple levels. The spinopelvic parameters, including peak stress of the lumbar isthmic-cortical bone, intervertebral discs, and facet joints; peak strain of the ligaments; peak muscle force; and percentage difference in range of motion, were then analysed and compared between the two models under flexion-extension (F-E), lateral bending (LB), and axial rotation (AR) loading conditions. Compared with the normal spine model, the DS spine model exhibited greater stress and strain in adjacent biological tissues. Stress at the L4/5 disc and facet joints under AR and LB conditions was approximately 6.6 times greater in the DS spine model than in the normal model, peak strain of the posterior longitudinal ligament in the normal model was one-tenth of that in the DS model, and the DS model contained more high-stress areas, with stress notably transferring forwards. Additionally, compared with the normal spine model, the DS model exhibited greater muscle tensile forces in the lumbosacral muscle groups during F-E and LB motions. The psoas muscle in the DS model was subjected to 23.2% greater tensile force than that in the normal model. These findings indicated that L4 anterior slippage and changes in lumbosacral-pelvic alignment affect the biomechanical response of muscles. In summary, the present work demonstrated a certain level of accuracy and validity of our models, as well as the differences between them. Alterations in spondylolisthesis and the accompanying overall imbalance in the spinopelvic complex result in increased loading response levels of the functional spinal units in DS patients, creating a vicious cycle that exacerbates the imbalance in the lumbosacral region. Therefore, clinicians are encouraged to propose specific exercises for the ventral muscles, such as the psoas group, to address spinopelvic imbalance and halt the progression of DS.
{"title":"Development of a spinopelvic complex finite element model for quantitative analysis of the biomechanical response of patients with degenerative spondylolisthesis.","authors":"Ziyang Liang, Xiaowei Dai, Weisen Li, Weimei Chen, Qi Shi, Yizong Wei, Qianqian Liang, Yuanfang Lin","doi":"10.1007/s11517-024-03218-5","DOIUrl":"https://doi.org/10.1007/s11517-024-03218-5","url":null,"abstract":"<p><p>Research on degenerative spondylolisthesis (DS) has focused primarily on the biomechanical responses of pathological segments, with few studies involving muscle modelling in simulated analysis, leading to an emphasis on the back muscles in physical therapy, neglecting the ventral muscles. The purpose of this study was to quantitatively analyse the biomechanical response of the spinopelvic complex and surrounding muscle groups in DS patients using integrative modelling. The findings may aid in the development of more comprehensive rehabilitation strategies for DS patients. Two new finite element spinopelvic complex models with detailed muscles for normal spine and DS spine (L4 forwards slippage) modelling were established and validated at multiple levels. Then, the spinopelvic position parameters including peak stress of the lumbar isthmic-cortical bone, intervertebral discs, and facet joints; peak strain of the ligaments; peak force of the muscles; and percentage difference in the range of motion were analysed and compared under flexion-extension (F-E), lateral bending (LB), and axial rotation (AR) loading conditions between the two models. Compared with the normal spine model, the DS spine model exhibited greater stress and strain in adjacent biological tissues. Stress at the L4/5 disc and facet joints under AR and LB conditions was approximately 6.6 times greater in the DS spine model than in the normal model, the posterior longitudinal ligament peak strain in the normal model was 1/10 of that in the DS model, and more high-stress areas were found in the DS model, with stress notably transferring forwards. Additionally, compared with the normal spine model, the DS model exhibited greater muscle tensile forces in the lumbosacral muscle groups during F-E and LB motions. The psoas muscle in the DS model was subjected to 23.2% greater tensile force than that in the normal model. These findings indicated that L4 anterior slippage and changes in lumbosacral-pelvic alignment affect the biomechanical response of muscles. In summary, the present work demonstrated a certain level of accuracy and validity of our models as well as the differences between the models. Alterations in spondylolisthesis and the accompanying overall imbalance in the spinopelvic complex result in increased loading response levels of the functional spinal units in DS patients, creating a vicious cycle that exacerbates the imbalance in the lumbosacral region. 
Therefore, clinicians are encouraged to propose specific exercises for the ventral muscles, such as the psoas group, to address spinopelvic imbalance and halt the progression of DS.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contour-constrained branch U-Net for accurate left ventricular segmentation in echocardiography
Pub Date : 2024-10-17  DOI: 10.1007/s11517-024-03201-0
Mingjun Qu, Jinzhu Yang, Honghe Li, Yiqiu Qi, Qi Yu
Using echocardiography to assess left ventricular (LV) function is one of the most crucial cardiac examinations in clinical diagnosis, and LV segmentation plays a particularly vital role in medical image processing because many important clinical diagnostic parameters, such as the ejection fraction, are derived from the segmentation results. However, echocardiography typically has low resolution and contains a significant amount of noise and motion artifacts, making accurate segmentation challenging, especially near the cardiac chamber boundary, which significantly restricts the accurate calculation of subsequent clinical parameters. In this paper, our goal is to achieve accurate LV segmentation through a simplified approach that introduces a branch sub-network into the decoder of the traditional U-Net. Specifically, we employ LV contour features to supervise the branch decoding process and use a cross attention module to facilitate interaction between the branch and the original decoding process, thereby improving segmentation performance in the region of the LV boundaries. In the experiments, the proposed branch U-Net (BU-Net) demonstrated superior performance on the CAMUS and EchoNet-dynamic public echocardiography segmentation datasets in comparison to state-of-the-art segmentation models, without the need for complex residual connections or transformer-based architectures. Our codes are publicly available at Anonymous Github https://anonymous.4open.science/r/Anoymous_two-BFF2/ .
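The cross attention module coupling the contour branch with the main decoder is described only at a high level. Below is a generic cross-attention sketch (queries from the decoder features, keys/values from the branch features) built on torch.nn.MultiheadAttention; the feature shapes, head count, and module name are assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class DecoderBranchCrossAttention(nn.Module):
    """Decoder features attend to contour-branch features (generic cross-attention)."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=channels, num_heads=num_heads, batch_first=True)

    def forward(self, decoder_feat, branch_feat):
        # Both inputs: (B, C, H, W). Flatten spatial dims into token sequences.
        b, c, h, w = decoder_feat.shape
        q = decoder_feat.flatten(2).transpose(1, 2)   # queries from decoder, (B, H*W, C)
        kv = branch_feat.flatten(2).transpose(1, 2)   # keys/values from branch, (B, H*W, C)
        out, _ = self.attn(q, kv, kv)
        return out.transpose(1, 2).reshape(b, c, h, w)

# Example with 64-channel feature maps:
m = DecoderBranchCrossAttention(64)
y = m(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
```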
{"title":"Contour-constrained branch U-Net for accurate left ventricular segmentation in echocardiography.","authors":"Mingjun Qu, Jinzhu Yang, Honghe Li, Yiqiu Qi, Qi Yu","doi":"10.1007/s11517-024-03201-0","DOIUrl":"https://doi.org/10.1007/s11517-024-03201-0","url":null,"abstract":"<p><p>Using echocardiography to assess the left ventricular function is one of the most crucial cardiac examinations in clinical diagnosis, and LV segmentation plays a particularly vital role in medical image processing as many important clinical diagnostic parameters are derived from the segmentation results, such as ejection function. However, echocardiography typically has a lower resolution and contains a significant amount of noise and motion artifacts, making it a challenge to accurate segmentation, especially in the region of the cardiac chamber boundary, which significantly restricts the accurate calculation of subsequent clinical parameters. In this paper, our goal is to achieve accurate LV segmentation through a simplified approach by introducing a branch sub-network into the decoder of the traditional U-Net. Specifically, we employed the LV contour features to supervise the branch decoding process and used a cross attention module to facilitate the interaction relationship between the branch and the original decoding process, thereby improving the segmentation performance in the region LV boundaries. In the experiments, the proposed branch U-Net (BU-Net) demonstrated superior performance on CAMUS and EchoNet-dynamic public echocardiography segmentation datasets in comparison to state-of-the-art segmentation models, without the need for complex residual connections or transformer-based architectures. Our codes are publicly available at Anonymous Github https://anonymous.4open.science/r/Anoymous_two-BFF2/ .</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating deep learning techniques for optimal neurons counting and characterization in complex neuronal cultures
Pub Date : 2024-10-17  DOI: 10.1007/s11517-024-03202-z
Angel Rio-Alvarez, Pablo García Marcos, Paula Puerta González, Esther Serrano-Pertierra, Antonello Novelli, M Teresa Fernández-Sánchez, Víctor M González
The counting and characterization of neurons in primary cultures have long been areas of significant scientific interest due to their multifaceted applications, ranging from neuronal viability assessment to the study of neuronal development. Traditional methods, often relying on fluorescence or colorimetric staining and manual segmentation, are time-consuming, labor-intensive, and prone to error, raising the need for the development of automated and reliable methods. This paper delves into the evaluation of three pivotal deep learning techniques: semantic segmentation, which allows for pixel-level classification and is solely suited for characterization; object detection, which focuses on counting and locating neurons; and instance segmentation, which amalgamates the features of the other two but employs more intricate structures. The goal of this research is to discern which technique or combination of techniques yields the optimal results for automatic counting and characterization of neurons in images of neuronal cultures. Following rigorous experimentation, we conclude that instance segmentation stands out, providing superior outcomes for both challenges.
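To make the comparison concrete: once an instance-segmentation model produces per-neuron masks, both counting and basic characterization (for example, per-neuron area) fall out directly, whereas object detection yields only counts and locations. The small NumPy sketch below assumes predictions arrive as a labeled mask (0 = background, 1..N = neuron instances); this is an illustrative post-processing step, not part of the paper's pipeline.

```python
import numpy as np

def count_and_characterize(instance_mask: np.ndarray):
    """Count neuron instances and report per-instance pixel area from a labeled mask."""
    labels = np.unique(instance_mask)
    labels = labels[labels != 0]                      # drop the background label
    areas = {int(l): int((instance_mask == l).sum()) for l in labels}
    return len(labels), areas

# Toy example: two "neurons" in a 4x4 field
mask = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 2, 2],
                 [0, 0, 2, 0]])
count, areas = count_and_characterize(mask)
print(count, areas)   # 2 {1: 3, 2: 3}
```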
{"title":"Evaluating deep learning techniques for optimal neurons counting and characterization in complex neuronal cultures.","authors":"Angel Rio-Alvarez, Pablo García Marcos, Paula Puerta González, Esther Serrano-Pertierra, Antonello Novelli, M Teresa Fernández-Sánchez, Víctor M González","doi":"10.1007/s11517-024-03202-z","DOIUrl":"https://doi.org/10.1007/s11517-024-03202-z","url":null,"abstract":"<p><p>The counting and characterization of neurons in primary cultures have long been areas of significant scientific interest due to their multifaceted applications, ranging from neuronal viability assessment to the study of neuronal development. Traditional methods, often relying on fluorescence or colorimetric staining and manual segmentation, are time consuming, labor intensive, and prone to error, raising the need for the development of automated and reliable methods. This paper delves into the evaluation of three pivotal deep learning techniques: semantic segmentation, which allows for pixel-level classification and is solely suited for characterization; object detection, which focuses on counting and locating neurons; and instance segmentation, which amalgamates the features of the other two but employing more intricate structures. The goal of this research is to discern what technique or combination of those techniques yields the optimal results for automatic counting and characterization of neurons in images of neuronal cultures. Following rigorous experimentation, we conclude that instance segmentation stands out, providing superior outcomes for both challenges.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142479180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}