
Latest Publications in Biomedical Signal Processing and Control

A generative adversarial network based on deep supervision for anatomical and functional image fusion
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-30 · DOI: 10.1016/j.bspc.2024.107011
Shiqiang Liu, Weisheng Li, Guofen Wang, Yuping Huang, Yin Zhang, Dan He
Medical image fusion techniques improve single-image representations by integrating salient information from medical images of different modalities. However, existing fusion methods suffer from limitations such as vanishing gradients, blurred details, and low efficiency. To alleviate these problems, a generative adversarial network based on deep supervision (DSGAN) is proposed. First, a two-branch structure separately extracts salient information, such as texture and metabolic information, from the different modal images. Self-supervised learning is performed through a new deep supervision module that strengthens effective feature extraction. The fused and multimodal input images are then passed to the discriminator for scoring. An adversarial loss based on the Earth Mover’s distance ensures that more spatial-frequency, gradient, and contrast information is retained in the fused image and makes model training more stable. In addition, DSGAN is an end-to-end model that requires no manually designed fusion rules. Compared with classic fusion methods, DSGAN retains rich texture details and edge information from the input images, fuses images faster, and performs better on objective evaluation metrics.
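To make the Earth Mover's distance loss concrete, here is a minimal PyTorch sketch of a WGAN-style critic and the corresponding loss pair. The layer sizes and the single-channel MRI/PET input pair are illustrative assumptions, not the authors' DSGAN architecture, and a Lipschitz constraint (weight clipping or a gradient penalty) would still be needed in practice.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Toy critic that scores a fused image together with its two source modalities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),  # fused + MRI + PET, 1 channel each
            nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 1),  # unbounded score, as the Earth Mover's distance loss requires
        )

    def forward(self, fused, mri, pet):
        return self.net(torch.cat([fused, mri, pet], dim=1))

def critic_loss(critic, real, fake, mri, pet):
    # Wasserstein critic objective: widen the score gap between real and generated samples
    return critic(fake, mri, pet).mean() - critic(real, mri, pet).mean()

def generator_loss(critic, fake, mri, pet):
    # The generator tries to raise the critic's score of its fused output
    return -critic(fake, mri, pet).mean()
```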
Citations: 0
Studying of deep neural networks and delta and alpha sub-bands harmony signals for Prediction of epilepsy
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-29 · DOI: 10.1016/j.bspc.2024.107066
G. Alizadeh, T. Yousefi Rezaii, S. Meshgini
Epilepsy, a seizure disorder, is one of the most significant diseases in the global community; more than 1% of the world’s population is affected by it. Mild cases can be controlled with medication. In most medical centers and hospitals, neurologists use electroencephalography (EEG) to diagnose epilepsy, and in recent years researchers have conducted numerous studies on predicting epileptic seizures from EEG. This study presents a new method that improves the accuracy, sensitivity, and other key metrics of seizure prediction. In the proposed algorithm, brain signals are processed in two stages. In the first stage, the signals are decomposed into delta, theta, beta, and alpha sub-bands using the Discrete Wavelet Transform (DWT). The accuracy of each sub-band is then analyzed with a Long Short-Term Memory (LSTM) network, and sub-bands exceeding 70% accuracy are selected for the second stage. In the second stage, images of the selected sub-band harmonic signals are fed to a convolutional neural network (CNN) to extract features and make the final decision. The proposed method improves all seizure-prediction metrics, including accuracy, sensitivity, and AUC, with results showing a 45% increase over the conventional method.
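The band split described above maps naturally onto a dyadic wavelet decomposition. The sketch below, using PyWavelets, shows how a 5-level DWT yields approximations of the clinical bands, assuming a 256 Hz sampling rate (the abstract does not state one) and a synthetic signal in place of real EEG.

```python
import numpy as np
import pywt

fs = 256  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)  # synthetic channel

# 5-level DWT: each detail level covers a dyadic frequency band
coeffs = pywt.wavedec(eeg, "db4", level=5)
cA5, cD5, cD4, cD3, cD2, cD1 = coeffs

bands = {
    "delta (0-4 Hz)": cA5,   # approximation at level 5
    "theta (4-8 Hz)": cD5,
    "alpha (8-16 Hz)": cD4,  # nominal; dyadic bands only approximate clinical ones
    "beta (16-32 Hz)": cD3,
}
for name, c in bands.items():
    print(f"{name}: {len(c)} coefficients, energy {np.sum(c**2):.1f}")
```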
Citations: 0
MFCPNet: Real time medical image segmentation network via multi-scale feature fusion and channel pruning
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-29 · DOI: 10.1016/j.bspc.2024.107074
Linlin Hou, Zishen Yan, Christian Desrosiers, Hui Liu
Real-time medical image segmentation not only enhances the interactivity and feasibility of applications but also supports more medical application scenarios. Local feature extraction methods that rely on Convolutional Neural Networks (CNNs) are hampered by restricted receptive fields, which weakens their ability to capture comprehensive information. Conversely, global feature extraction methods based on Transformers generally struggle in real-time tasks because of their heavy computational demands. To address these challenges and explore accurate, real-time medical image segmentation models, we introduce MFCPNet. MFCPNet first devises Multi-Scale Multi-Channel Convolution (MSMC Conv) to extract local features across various levels and scales, yielding richer local information without unduly burdening the model. Second, to enlarge the receptive field of the convolutions and improve the model’s generalization, we introduce an Attention Block (Attn Block) with rotation invariance. Inspired by lightweight Bi-Level Routing Attention (BRA) and MLP-Mixer, this block effectively mitigates the constraints of convolutional structures and achieves superior contextual modeling. Finally, a judicious pruning of the channel count within MFCPNet strikes a trade-off between segmentation accuracy and efficiency. To evaluate the proposed method, we compare it with several classic approaches on three types of datasets: retinal images, brain scans, and colon polyps. Across these datasets, MFCPNet achieves segmentation performance comparable to existing methods at a computational cost of 2.2G FLOPs and 0.49M parameters, and its processing speed of 79.54 FPS meets the requirements of real-time applications.
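As a rough illustration of the multi-scale idea, the following PyTorch block runs parallel convolutions at several kernel sizes and concatenates the results. It is a generic multi-scale convolution, not the paper's MSMC Conv, whose exact design may differ; the kernel sizes and channel split are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleConv(nn.Module):
    """Parallel convolutions at several kernel sizes, concatenated channel-wise."""
    def __init__(self, in_ch, out_ch, scales=(1, 3, 5)):
        super().__init__()
        branch_ch = out_ch // len(scales)
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2) for k in scales
        )
        self.fuse = nn.Conv2d(branch_ch * len(scales), out_ch, 1)  # 1x1 channel mixing

    def forward(self, x):
        feats = [b(x) for b in self.branches]  # same spatial size, different receptive fields
        return self.fuse(torch.cat(feats, dim=1))

x = torch.randn(1, 3, 64, 64)
print(MultiScaleConv(3, 24)(x).shape)  # torch.Size([1, 24, 64, 64])
```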
Citations: 0
Deep-learning based fusion of spatial relationship classification between mandibular third molar and inferior alveolar nerve using panoramic radiograph images
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-29 · DOI: 10.1016/j.bspc.2024.107059
Nida Kumbasar, Mustafa Taha Güller, Özkan Miloğlu, Emin Argun Oral, Ibrahim Yucel Ozbek
It is crucial for clinicians to have prior knowledge of the spatial relationship between an impacted mandibular third molar (MM3) and the inferior alveolar nerve (IAN) before an extraction procedure. In terms of the IAN position relative to the MM3, this relationship can take four spatial forms, although it has not been studied extensively. To identify the relationship type, this study proposes a novel four-class classification framework that fuses the AlexNet, VGG16, and VGG19 deep networks on panoramic radiograph (PR) images. For this purpose, 546 PR images of impacted MM3s, collected from 290 patients, were labeled by specialists using the corresponding cone beam computed tomography (CBCT) images. The proposed network is trained and tested using 10-fold cross-validation, and experiments were performed on tasks of increasing difficulty. In the first (MM3 and IAN related/unrelated), an accuracy of 94.1% was obtained. In the subsequent task of classifying whether the IAN lies on the lingual or vestibular (buccal) side of the MM3, a test accuracy of 80.6% was obtained. Finally, in the challenging four-class problem comprising the unrelated, lingual, vestibular, and other classes, an accuracy of 79.7% was achieved. The results show that the proposed method not only delivers state-of-the-art performance but also suggests a new classification basis for the MM3-IAN relationship problem.
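One plausible reading of the three-backbone fusion is feature-level concatenation, sketched below with torchvision models. The pooling choice, the linear classifier head, and the use of untrained weights are assumptions for illustration only, not the authors' fusion scheme.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    """Concatenates pooled features from AlexNet, VGG16, and VGG19, then
    classifies into the four assumed MM3-IAN relationship classes."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.backbones = nn.ModuleList([
            models.alexnet(weights=None).features,
            models.vgg16(weights=None).features,
            models.vgg19(weights=None).features,
        ])
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dims = 256 + 512 + 512  # final conv channels of the three backbones
        self.head = nn.Linear(feat_dims, num_classes)

    def forward(self, x):
        feats = [self.pool(b(x)).flatten(1) for b in self.backbones]
        return self.head(torch.cat(feats, dim=1))

logits = FusionClassifier()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 4])
```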
Citations: 0
DSU-Net: Dual-Stage U-Net based on CNN and Transformer for skin lesion segmentation
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-29 · DOI: 10.1016/j.bspc.2024.107090
Longwei Zhong, Tiansong Li, Meng Cui, Shaoguo Cui, Hongkui Wang, Li Yu
Precise delineation of skin lesions in dermoscopy images is crucial for the quantitative analysis of melanoma, yet it remains difficult because of inherent characteristics such as large variability in lesion size and shape and fuzzy boundaries. In recent years, CNNs and Transformers have shown notable benefits for skin lesion segmentation. We therefore propose the DSU-Net segmentation network, inspired by the manual segmentation process: through the coordination of two segmentation sub-networks, it simulates a workflow in which the lesion area is first coarsely identified and then meticulously delineated. We then propose a two-stage balanced loss function that better mimics this process by adaptively controlling the loss weights. Further, we introduce a multi-feature fusion module that combines several feature extraction modules to extract richer feature information, refine the lesion area, and obtain accurate segmentation boundaries. Finally, we conducted extensive experiments on the ISIC2017, ISIC2018, and PH2 datasets to assess and validate the efficacy of DSU-Net against the most advanced approaches currently available. The code is available at https://github.com/ZhongLongwei/DSU-Net.
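A two-stage balanced loss could, for example, shift weight from the coarse to the fine sub-network as training progresses. The sketch below shows one such schedule in PyTorch; the linear weighting and the BCE+Dice combination are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred_logits, target, eps=1e-6):
    # Soft Dice over the whole batch; 0 when prediction matches target exactly
    pred = torch.sigmoid(pred_logits)
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def two_stage_loss(coarse_logits, fine_logits, target, epoch, total_epochs):
    """Emphasize the coarse stage early and the fine stage late -- one plausible
    reading of an 'adaptively controlled' loss weight."""
    w = epoch / total_epochs  # 0 -> coarse-dominated, 1 -> fine-dominated
    l_coarse = F.binary_cross_entropy_with_logits(coarse_logits, target) \
        + dice_loss(coarse_logits, target)
    l_fine = F.binary_cross_entropy_with_logits(fine_logits, target) \
        + dice_loss(fine_logits, target)
    return (1 - w) * l_coarse + w * l_fine
```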
Citations: 0
An automatic segmentation of calcified tissue in forward-looking intravascular ultrasound images
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-28 · DOI: 10.1016/j.bspc.2024.107095
Ziyu Cui, Zhaoju Zhu, Peiwen Huang, Chuhang Gao, Bingwei He
The assessment of coronary artery images plays a crucial part in the diagnosis and treatment of cardiovascular diseases (CVD). Forward-looking intravascular ultrasound (FL-IVUS) has a distinct advantage in assessing CVD owing to its superior resolution and imaging capability, especially in severely calcified scenarios. Demarcating the lumen and media-adventitia and identifying calcified tissue are the initial steps in assessing CVD such as atherosclerosis with FL-IVUS images. In this research, we introduce a novel approach for automated lumen segmentation and identification of calcified tissue in FL-IVUS images. The proposed method uses superpixel segmentation and fuzzy C-means clustering (FCM) to identify regions that potentially correspond to lumina, and then refines the lumen contours with connected-component labeling and active contour methods. To handle the distinctive depth information in FL-IVUS images, ellipse fitting and region detectors are applied to identify areas containing calcified tissue. On our dataset of 43 FL-IVUS images, the method achieved mean values for the Jaccard measure, Dice coefficient, Hausdorff distance, and percentage area difference of 0.952 ± 0.016, 0.975 ± 0.008, 0.296 ± 0.186, and 0.019 ± 0.010, respectively. Compared with traditional segmentation approaches, the proposed approach also yields higher image quality. The test results demonstrate the effectiveness of this automated segmentation technique for detecting lumina and calcified tissue in FL-IVUS images.
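The superpixel-plus-FCM front end can be illustrated with scikit-image's SLIC and a small hand-rolled fuzzy C-means over per-superpixel mean intensities. The test image, the cluster count, the 0.5 membership threshold, and the choice of the darkest cluster as the lumen candidate are all assumptions for the sketch, not the paper's settings.

```python
import numpy as np
from skimage import data, segmentation

def fcm(x, c=2, m=2.0, iters=50):
    """Minimal fuzzy C-means on 1-D features; returns memberships (n, c) and centers."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=len(x))          # random fuzzy partition
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)  # weighted cluster means
        d = np.abs(x[:, None] - centers) + 1e-9
        inv = d ** (-2 / (m - 1))
        u = inv / inv.sum(1, keepdims=True)             # standard FCM membership update
    return u, centers

img = data.camera() / 255.0                             # stand-in for an FL-IVUS frame
labels = segmentation.slic(img, n_segments=200, channel_axis=None)
means = np.array([img[labels == s].mean() for s in np.unique(labels)])
u, centers = fcm(means)                                 # fuzzy membership per superpixel
lumen_cluster = centers.argmin()                        # assume darkest cluster ~ lumen
candidates = np.unique(labels)[u[:, lumen_cluster] > 0.5]
print(f"{len(candidates)} candidate lumen superpixels")
```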
Citations: 0
Deep neural network model for diagnosing diabetic retinopathy detection: An efficient mechanism for diabetic management
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-28 · DOI: 10.1016/j.bspc.2024.107035
Dharmalingam Muthusamy, Parimala Palani
Diabetic retinopathy (DR) is a common eye disease and a leading cause of blindness in diabetic patients. Detecting microaneurysms in fundus images and identifying DR at a preliminary stage have been significant challenges for decades, and systematic screening and intervention are the most efficient mechanisms for disease management. The sizeable population of diabetic patients and their enormous screening requirements have given rise to computer-aided, automatic diagnosis of DR, and the use of deep neural networks in DR diagnosis has attracted much attention and seen considerable advancement. DR screening carries a sensitivity and specificity particular to the population tested: even a correctly performed test can return a false positive or false negative result. Despite the advances made so far, there remains room to improve the sensitivity and specificity of DR diagnosis. In this work, a novel method called the Luminosity Normalized Symmetric Deep Convolute Tubular Classifier (LN-SDCTC) is proposed for DR detection. The LN-SDCTC method has two parts. First, with retinal color fundus images as input, the Luminosity Normalized Retinal Color Fundus Preprocessing model produces a noise-minimized, contrast-enhanced image. Second, the preprocessed image is fed to the Symmetric Deep Convolute network, where relevant features are selected with the aid of the convolutional layer (the Tubular Neighborhood Window), the average pooling layer (the average magnitude of tubular neighbors), and the max-pooling layer (the maximum contrast orientation). Finally, with the extracted features as input, a multinomial regression classification function determines the severity of the DR disease. Extensive experimental results in terms of peak signal-to-noise ratio, disease detection time, sensitivity, and specificity reveal that the proposed method greatly facilitates the deep learning model and yields better results than various state-of-the-art methods.
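The final stage, combining pooled convolutional features with a multinomial (softmax) regression, can be sketched as follows in PyTorch. The layer sizes and the five severity grades are illustrative assumptions; the paper's tubular-window layers are not reproduced here.

```python
import torch
import torch.nn as nn

class SeverityClassifier(nn.Module):
    """Conv features -> average- and max-pooled summaries -> multinomial
    (softmax) regression over assumed DR severity grades."""
    def __init__(self, num_grades=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.avg = nn.AdaptiveAvgPool2d(1)   # average-magnitude summary
        self.max = nn.AdaptiveMaxPool2d(1)   # maximum-response summary
        self.classifier = nn.Linear(64, num_grades)  # multinomial regression head

    def forward(self, x):
        f = self.features(x)
        pooled = torch.cat([self.avg(f), self.max(f)], dim=1).flatten(1)
        return self.classifier(pooled)  # logits; apply softmax at inference

probs = torch.softmax(SeverityClassifier()(torch.randn(1, 3, 128, 128)), dim=1)
print(probs)
```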
Citations: 0
Automatic skin tumor detection in dermoscopic samples using Online Patch Fuzzy Region Based Segmentation
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-28 · DOI: 10.1016/j.bspc.2024.107096
A. Ashwini, T Sahila, A. Radhakrishnan, M. Vanitha, G. Irin Loretta
Skin tumor detection and classification play an important role in research, particularly in medical diagnosis, and the classification of skin tumors grows more significant as the number of affected people increases. This research work proposes a new and efficient method for enhancing skin images and distinguishing tumors from other areas in computed tomography (CT) skin tumor images. The first step is image acquisition. The Boosted Notch Diffusion Filtering − Mean Pixel Histogram Equalization (BNDF-MPHE) algorithm then serves as the preprocessing step within the presented model. Next, Superpixel Contour Metric Segment Clustering (SCMSC) is followed by an Online Patch Fuzzy Region Based Segmentation (OPFRBS) algorithm, which segments the skin tumor cells effectively with an accuracy of 99.25% for benign and 97.39% for malignant tumors; processing a lesion takes less than 2 s. The proposed method, implemented on the MATLAB 2024a workbench, achieves considerably higher accuracy than other existing algorithms on both benign and malignant samples. The methodology has been validated effectively with real-time clinical samples, helping patients resume a normal life.
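BNDF-MPHE itself is not public, but the general denoise-then-equalize family of preprocessing it belongs to can be illustrated with scikit-image; the filter choice and weight below are stand-ins, not the paper's algorithm.

```python
import numpy as np
from skimage import data, exposure, restoration

# Stand-in frame; a dermoscopic or CT skin image would be loaded here instead.
img = data.camera() / 255.0

# Edge-preserving denoising followed by histogram equalization -- a generic
# denoise-then-equalize pipeline, not the paper's BNDF-MPHE algorithm.
denoised = restoration.denoise_tv_chambolle(img, weight=0.05)
equalized = exposure.equalize_hist(denoised)
print(equalized.min(), equalized.max())  # intensities mapped onto [0, 1]
```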
Citations: 0
A comparative study between laser speckle contrast imaging in transmission and reflection modes by adaptive window space direction contrast algorithm
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-28 · DOI: 10.1016/j.bspc.2024.107091
Guang Han, De Li, Jixin Yuan, Jie Lu, Jun Zhang, Huiquan Wang, Ruijuan Chen, Yifan Wu
Blood flow visualization is of paramount importance in diagnosing and treating vascular diseases, and laser speckle contrast imaging (LSCI) is a widely used technique for visualizing blood flow. However, reflection-mode laser speckle contrast imaging (R-LSCI) systems are limited in imaging depth and primarily suited to shallow blood flow imaging. In this study, we conducted a comparative analysis of transmission-mode laser speckle contrast imaging (T-LSCI) and R-LSCI for deep blood flow imaging using four spatial-domain imaging methods: spatial contrast (sK), adaptive window contrast (awK), space-directional contrast (sdK), and adaptive window space direction contrast (awsdK). Experimental results show that T-LSCI is superior to R-LSCI for imaging deep blood flow within tissue of a certain thickness and can be used for continuous non-invasive blood flow monitoring. In particular, using the awsdK method with T-LSCI substantially improves the visualization of deep blood flow and enhances the ability to monitor blood flow variations.
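The baseline sK method computes local speckle contrast K = σ/μ over a sliding window; the adaptive and directional variants change how that window is chosen. A minimal NumPy/SciPy sketch of the baseline on synthetic speckle follows; the 7-pixel window is an assumed typical choice.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_contrast(speckle, win=7):
    """Spatial contrast K = sigma/mu over a sliding window -- the standard sK map."""
    mean = uniform_filter(speckle, win)
    sq_mean = uniform_filter(speckle ** 2, win)
    std = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))  # local standard deviation
    return std / (mean + 1e-9)

rng = np.random.default_rng(0)
frame = rng.exponential(scale=1.0, size=(256, 256))  # synthetic raw speckle frame
K = spatial_contrast(frame)
print(K.mean())  # fully developed static speckle gives K near 1; flow lowers K
```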
Citations: 0
TLIR: Two-layer iterative refinement model for limited-angle CT reconstruction
IF 4.9 · CAS Tier 2 (Medicine) · Q1 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-28 · DOI: 10.1016/j.bspc.2024.107058
Qing Li, Tao Wang, RunRui Li, Yan Qiang, Bin Zhang, Jijie Sun, JuanJuan Zhao, Wei Wu
Limited-angle reconstruction is a typical ill-posed problem in computed tomography (CT). In practical applications, the scanning angles available for fixed scan targets and the radiation a patient can tolerate mean that complete projection data are usually unavailable, and images reconstructed by conventional analytical and iterative methods can suffer from severe structural distortion and tilt artefacts. In this paper, we propose a deep iterative model called TLIR that recovers the structural details of the missing parts of limited-angle CT images and reconstructs high-quality CT images from them. Specifically, we adapt the denoising diffusion probabilistic model to conditional image generation for the image-domain recovery problem: the model output starts from noise-blended limited-angle CT images and is iteratively refined by a residual U-Net trained on data at various noise levels. In addition, because the deep model corrupts the sampled portion of the sinogram data during inference, we propose a learnable data-fidelity module called DSEM to balance the data-domain exchange loss against the inference information loss. The two modules execute alternately, forming our two-layer iterative refinement model; this two-layer structure also makes the network more robust during training and inference. TLIR shows strong reconstruction performance at different limited angles and highly competitive results on all image evaluation metrics. The model proposed in this paper is open source at https://github.com/JinxTao/TLIR/tree/master.
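Schematically, the two-layer loop alternates a learned denoising step with a data-fidelity correction. The sketch below shows the control flow only; the `denoiser` and `fidelity` callables are placeholders for the trained residual U-Net and the DSEM module, whose internals are not reproduced.

```python
import torch

def iterative_refine(x_limited, denoiser, fidelity, betas):
    """Schematic TLIR-style loop: denoise, then re-impose data consistency,
    repeated from the highest noise level down to the lowest."""
    x = x_limited + torch.randn_like(x_limited) * betas[-1].sqrt()  # noise-blended start
    for t in reversed(range(len(betas))):
        x = denoiser(x, t)            # learned refinement at noise level t
        x = fidelity(x, x_limited)    # correction toward the measured data
    return x

# Tiny smoke test with identity-like placeholders standing in for the networks
img = torch.zeros(1, 1, 64, 64)
betas = torch.linspace(1e-4, 0.02, 10)
out = iterative_refine(img, lambda x, t: x * 0.9, lambda x, ref: 0.5 * (x + ref), betas)
print(out.shape)  # torch.Size([1, 1, 64, 64])
```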
Citations: 0