
Latest articles from Medical & Biological Engineering & Computing

Mark3D - A semi-automated open-source toolbox for 3D head-surface reconstruction and electrode position registration using a smartphone camera video.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-07 | DOI: 10.1007/s11517-024-03228-3
Suranjita Ganguly, Malaaika Mihir Chhaya, Ankita Jain, Aditya Koppula, Mohan Raghavan, Kousik Sarathy Sridharan

Source localization in EEG requires co-registering the EEG sensor locations with the subject's MRI. These sensor locations are typically captured either by electromagnetic tracking or by 3D scanning of the subject's head with the EEG cap using commercially available 3D scanners. Both methods have drawbacks: electromagnetic tracking is slow and immobile, while 3D scanners are expensive. Photogrammetry offers a cost-effective alternative but requires multiple photos with good spatial sampling of the head to adequately reconstruct its surface. Post-reconstruction, existing tools for electrode position labelling on the 3D head-surface offer limited visual feedback and do not easily accommodate customized montages, which are typical in multi-modal measurements. We introduce Mark3D, an open-source, integrated tool for 3D head-surface reconstruction from phone camera video. It eliminates the need to keep track of spatial sampling during image capture for video-based photogrammetry reconstruction. It also includes blur detection algorithms and a user-friendly interface for electrode marking and tracking, and integrates with popular toolboxes such as FieldTrip and MNE Python. The accuracy of the proposed method was benchmarked against the head-surface derived from a commercially available handheld 3D scanner, Einscan-Pro+ (Shining 3D Inc.), which we treat as the "ground truth". We used the reconstructed head-surfaces from the ground truth (G1) and the phone camera video (M1080) to mark the EEG electrode locations in 3D space using a dedicated UI provided in the tool. The electrode locations were then used to form pseudo-specific MRI templates for individual subjects to reconstruct source information. Somatosensory source activations in response to vibrotactile stimuli were estimated and compared between G1 and M1080. The mean positional error of the EEG electrodes between G1 and M1080 in 3D space was 0.09 ± 0.01 mm across different cortical areas, with temporal and occipital areas registering a relatively higher error than other regions such as frontal, central, or parietal areas. The error in source reconstruction was 0.033 ± 0.016 mm and 0.037 ± 0.017 mm in the left and right cortical hemispheres, respectively.
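The blur detection mentioned above is not detailed in the abstract; a common way to screen video frames for photogrammetry is to score sharpness with the variance of the Laplacian. The Python sketch below illustrates that idea with OpenCV; the threshold, frame step, and function name are assumptions for illustration, not Mark3D's actual API.

import cv2

def sharp_frames(video_path, blur_threshold=100.0, step=5):
    """Yield (index, frame) pairs whose Laplacian variance exceeds the threshold."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:  # subsample the video to limit photogrammetry input size
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance = blurry frame
            if score > blur_threshold:
                yield idx, frame
        idx += 1
    cap.release()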

Citations: 0
Research on imaging biomarkers for chronic subdural hematoma recurrence.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-06 | DOI: 10.1007/s11517-024-03232-7
Liyang Wu, Yvmei Zhu, Qiuyong Huang, Shuchao Chen, Haoyang Zhou, Zihao Xu, Bo Li, Hongbo Chen, Junhui Lv

This study utilizes radiomics to explore imaging biomarkers for predicting the recurrence of chronic subdural hematoma (CSDH), aiming to improve the prediction of CSDH recurrence risk. Analyzing CT scans from 64 patients with CSDH, we extracted 107 radiomic features and employed recursive feature elimination (RFE) and the XGBoost algorithm for feature selection and model construction. The feature selection process identified six key imaging biomarkers closely associated with CSDH recurrence: flatness, surface area to volume ratio, energy, run entropy, small area emphasis, and maximum axial diameter. The selection of these imaging biomarkers was based on their significance in predicting CSDH recurrence, revealing deep connections between postoperative variables and recurrence. After feature selection, there was a significant improvement in model performance. The XGBoost model demonstrated the best classification performance, with the average accuracy improving from 46.82% (before feature selection) to 80.74% and the AUC value increasing from 0.5864 to 0.7998. These results prove that precise feature selection significantly enhances the model's predictive capability. This study not only reveals imaging biomarkers for CSDH recurrence but also provides valuable insights for future personalized treatment strategies.
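As a sketch of the pipeline described above, the snippet below wraps recursive feature elimination (RFE) around an XGBoost classifier and evaluates the reduced feature set with cross-validated AUC. The hyperparameters are illustrative, and the radiomic feature matrix X and recurrence labels y are assumed inputs rather than the study's data.

from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

def select_and_score(X, y, n_features=6):
    """X: (n_patients, 107) radiomic features; y: binary recurrence labels."""
    base = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
    rfe = RFE(estimator=base, n_features_to_select=n_features)
    X_sel = rfe.fit_transform(X, y)      # keep the n_features most informative features
    auc = cross_val_score(base, X_sel, y, cv=5, scoring="roc_auc").mean()
    return rfe.support_, auc             # boolean mask of retained features, mean AUC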

Citations: 0
Load-bearing optimization for customized exoskeleton design based on kinematic gait reconstruction.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-06 | DOI: 10.1007/s11517-024-03234-5
Zhengxin Tu, Jinghua Xu, Zhenyu Dong, Shuyou Zhang, Jianrong Tan

This paper presents a load-bearing optimization method for customized exoskeleton design based on kinematic gait reconstruction (KGR). For people with acute joint injuries, it is no longer feasible to capture the movement gait via computer vision. Instead, a 3D reconstruction can be performed from CT (computed tomography) or MRI (magnetic resonance imaging) of the injured area to recover the micro-morphology of the joint occlusion. Innovatively, the disconnected entities are registered into a whole by surface topography matching with semi-definite computing, and KGR is then implemented by rebuilding continuous kinematic skeletal flexion postures. To verify the effectiveness of the reconstructed kinematic gait, finite element analysis (FEA) is conducted using Hertz contact theory. A lower-limb exoskeleton is taken as a verification instance, with the rod length ratio and angular rotation range set as the design considerations, so as to optimize the load-bearing parameters for individual kinematic gaits. The instance demonstrates that the proposed KGR provides a design paradigm for optimizing load-bearing capacity, on the basis of which an ergonomic customized exoskeleton can be designed from medical images alone, making it better suited to the large rehabilitation population.
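Since the abstract invokes Hertz contact theory to validate the FEA, a minimal sketch of the classical sphere-on-plane Hertz relations is given below; the material values a caller would pass are placeholders, not the paper's actual joint or exoskeleton properties.

import math

def hertz_sphere_on_plane(force_n, radius_m, e1, nu1, e2, nu2):
    """Return (contact_radius_m, max_pressure_pa) for a sphere pressed onto a plane."""
    e_star = 1.0 / ((1 - nu1**2) / e1 + (1 - nu2**2) / e2)  # effective contact modulus
    a = (3 * force_n * radius_m / (4 * e_star)) ** (1 / 3)   # contact patch radius
    p_max = 3 * force_n / (2 * math.pi * a**2)               # peak contact pressure
    return a, p_max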

Citations: 0
Optimization of three-dimensional esophageal tumor ablation by simultaneous functioning of multiple electrodes.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-04 | DOI: 10.1007/s11517-024-03230-9
Hongying Wang, Jincheng Zou, Shiqing Zhao, Aili Zhang

Radiofrequency ablation is a widely accepted, minimally invasive, and effective local treatment for tumors. However, its current application in esophageal cancer treatment is limited to targeting thin and superficial lesions, such as Barrett's esophagus. This study proposes an optimization method that uses multiple electrodes simultaneously to regulate the temperature field and achieve conformal ablation of tumors. A particle swarm optimization algorithm, coupled with a three-dimensional thermal ablation model, was developed to optimize the status of the functioning electrodes, the optimal voltage (Vopt), and the treatment duration (ttre) for targeted esophageal tumors. This approach takes into account both the electrical and thermal interactions of the electrodes. The results indicate that for esophageal cancers at various stages, with thickness (c) ranging from 4.5 mm to 10.0 mm, major axis (a) ranging from 7.3 mm to 27.3 mm, and minor axis (b) equaling 7.3 mm or 27.3 mm, as well as non-symmetrical geometries, complete tumor coverage (over 99.5%) close to conformal can be achieved. This method illustrates the potential for precise conformal ablation of esophageal cancers, and it may also be used for conformal treatments of other intraluminal lesions.
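A schematic of the particle swarm optimization loop over (voltage, treatment duration) is sketched below; tumor_coverage_cost() is a stand-in for the authors' three-dimensional thermal ablation model, and the bounds are hypothetical ranges, so the snippet only illustrates the optimization structure, not the paper's simulator.

import numpy as np

def tumor_coverage_cost(params):
    voltage, duration = params
    # placeholder objective penalizing deviation from an arbitrary target; the real cost
    # would come from simulating the temperature field and scoring conformal coverage
    return (voltage - 25.0) ** 2 + (duration - 60.0) ** 2

def pso(cost, bounds, n_particles=30, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds, dtype=float).T
    x = np.random.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()]
    for _ in range(n_iter):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # inertia + cognitive + social
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[pbest_cost.argmin()]
    return gbest, pbest_cost.min()

best_params, best_cost = pso(tumor_coverage_cost, bounds=[(10, 60), (10, 300)])  # V, s (hypothetical ranges)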

Citations: 0
Unsupervised cervical cell instance segmentation method integrating cellular characteristics.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-04 | DOI: 10.1007/s11517-024-03222-9
Yining Xie, Jingling Gao, Xueyan Bi, Jing Zhao

Cell instance segmentation is a key technology for cervical cancer auxiliary diagnosis systems. However, pixel-level annotation is time-consuming and labor-intensive, making it difficult to obtain a large amount of annotated data. This results in the model not being fully trained. In response to these problems, this paper proposes an unsupervised cervical cell instance segmentation method that integrates cell characteristics. Cervical cells have a clear corresponding structure between the nucleus and cytoplasm. This method fully takes this feature into account by building a dual-flow framework to locate the nucleus and cytoplasm and generate high-quality pseudo-labels. In the nucleus segmentation stage, the position and range of the nucleus are determined using the standard cell-restricted nucleus segmentation method. In the cytoplasm segmentation stage, a multi-angle collaborative segmentation method is used to achieve the positioning of the cytoplasm. First, taking advantage of the self-similarity characteristics of pixel blocks in cells, a cytoplasmic segmentation method based on self-similarity map iteration is proposed. The pixel blocks are mapped from the perspective of local details, and the iterative segmentation is repeated. Secondly, using low-level features such as cell color and shape, a self-supervised heatmap-aware cytoplasm segmentation method is proposed to obtain the activation map of the cytoplasm from the perspective of global attention. The two methods are fused to determine cytoplasmic regions, and combined with nuclear locations, high-quality pseudo-labels are generated. These pseudo-labels are used to train the model cyclically, and the loss strategy is used to encourage the model to discover new object masks, thereby obtaining a segmentation model with better performance. Experimental results show that this method achieves good results in cytoplasm segmentation. On the three datasets of ISBI, MS_CellSeg, and Cx22, 54.32%, 44.64%, and 66.52% AJI were obtained, respectively, which is better than other typical unsupervised methods selected in this article.
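One common way to fuse a nucleus mask with a cytoplasm probability map into per-cell pseudo-labels is marker-controlled watershed; the scikit-image sketch below illustrates that fusion step only and is not the authors' exact combination of the self-similarity and heatmap branches.

from scipy import ndimage as ndi
from skimage.segmentation import watershed

def pseudo_labels(nucleus_mask, cytoplasm_prob, cyto_threshold=0.5):
    """nucleus_mask: bool (H, W); cytoplasm_prob: float (H, W) in [0, 1]."""
    markers, _ = ndi.label(nucleus_mask)               # one marker per detected nucleus
    cyto_mask = cytoplasm_prob > cyto_threshold        # candidate cytoplasm region
    # flood outward from the nuclei across the cytoplasm probability landscape
    labels = watershed(-cytoplasm_prob, markers, mask=cyto_mask | nucleus_mask)
    return labels                                      # 0 = background, 1..N = cell instances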

Citations: 0
SMILEY-assistive application to support social and emotional skills in SPCD individuals.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-01 | Epub Date: 2024-06-18 | DOI: 10.1007/s11517-024-03151-7
Muskan Chawla, Surya Narayan Panda, Vikas Khullar

According to the available studies, mobile applications have provided significant support in improving the diverse skills of individuals with social pragmatic communication disorder (SPCD). Over the last decade, SPCD has affected 8 to 11% of individuals, and therapy sessions cost between $50 and $150 per hour. This preliminary study aims to develop an interactive, user-friendly intervention to enhance social and emotional interaction skills in individuals with SPCD. The proposed intervention is an Android application that enhances social and emotional interaction skills. This pilot study involved 29 human subjects aged 7-13 years with pragmatic communication deficits. In a randomized controlled trial, the intervention was developed and implemented with consideration of caregiver and professional requirements. Improvement was analyzed using standard scales, including the Social Communication Questionnaire (SCQ) and the Social Communication Disorder Scale (SCDS), and the outcomes were examined through statistical parameters (mean, standard deviation) and tests (t-test). The intervention significantly improved the social and emotional skills of individuals with deficits. Before the intervention, the SCQ scores had a mean of 6.48 (standard deviation = 3.37) and the SCDS scores had a mean of 8.17 (standard deviation = 4.79); after the intervention, these improved to a mean of 8.24 (standard deviation = 3.95) for the SCQ and 9.48 (standard deviation = 4.72) for the SCDS. The t-scores and p-values indicate a significant improvement in participants' performance after successful completion of the intervention. The intervention therefore had a significant impact on improving social and emotional skills. The study concluded that it allows individuals to practice social and emotional interaction skills in a structured, controlled, and interactive environment. The intervention was found acceptable in reviews by caregivers and professionals, based on essential criteria including user experience, usability, interactive nature, reliability, and credibility.
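The pre/post comparison reported above amounts to a paired t-test on per-participant scale scores; a minimal sketch with SciPy follows, where the score arrays are assumed inputs rather than the study's actual data (n = 29).

import numpy as np
from scipy import stats

def paired_improvement(before, after):
    """Return (mean_change, t_statistic, p_value) for paired pre/post scores."""
    before, after = np.asarray(before, dtype=float), np.asarray(after, dtype=float)
    t_stat, p_value = stats.ttest_rel(after, before)   # paired t-test across participants
    return (after - before).mean(), t_stat, p_value

# usage: paired_improvement(scq_before, scq_after) for the SCQ, and likewise for the SCDS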

Citations: 0
Soft-tissue sound-speed-aware ultrasound-CT registration method for computer-assisted orthopedic surgery.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-01 | Epub Date: 2024-06-07 | DOI: 10.1007/s11517-024-03123-x
Chuanba Liu, Wenshuo Wang, Tao Sun, Yimin Song

Ultrasound (US) has been introduced to computer-assisted orthopedic surgery for bone registration owing to its advantages of nonionizing radiation, low cost, and noninvasiveness. However, registration accuracy is limited by US image distortion caused by variations in the acoustic properties of soft tissues. This paper proposes a soft-tissue sound-speed-aware registration method to overcome this challenge. First, a feature enhancement strategy based on multi-channel overlay is proposed for U2-net to improve bone segmentation performance. Second, the sound speed of soft tissue is estimated by simulating the bone surface distance map to update the US-derived points. Finally, an iterative registration strategy is adopted to optimize the registration result. A phantom experiment was conducted using different registration methods for the femur and tibia/fibula. The fiducial registration error (femur, 0.98 ± 0.08 mm (mean ± SD); tibia/fibula, 1.29 ± 0.19 mm) and the target registration error (less than 2.11 mm) demonstrate the high accuracy of the proposed method. The experimental results suggest that the proposed method can be integrated into navigation systems that provide surgeons with accurate 3D navigation information.
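For reference, the fiducial registration error quoted above is the mean residual distance after a rigid alignment of corresponding points; the sketch below uses the standard Kabsch/SVD solution and is a generic formulation, not the paper's full sound-speed-aware pipeline.

import numpy as np

def fiducial_registration_error(us_pts, ct_pts):
    """us_pts, ct_pts: (N, 3) arrays of corresponding fiducials; returns (R, t, FRE)."""
    mu_us, mu_ct = us_pts.mean(axis=0), ct_pts.mean(axis=0)
    H = (us_pts - mu_us).T @ (ct_pts - mu_ct)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))               # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_ct - R @ mu_us
    residuals = np.linalg.norm(us_pts @ R.T + t - ct_pts, axis=1)
    return R, t, residuals.mean()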

Citations: 0
Automatic text classification of prostate cancer malignancy scores in radiology reports using NLP models.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-01 | Epub Date: 2024-06-07 | DOI: 10.1007/s11517-024-03131-x
Jaime Collado-Montañez, Pilar López-Úbeda, Mariia Chizhikova, M Carlos Díaz-Galiano, L Alfonso Ureña-López, Teodoro Martín-Noguerol, Antonio Luna, M Teresa Martín-Valdivia

This paper presents the implementation of two automated text classification systems for prostate cancer findings based on the PI-RADS criteria. Specifically, a traditional machine learning model using XGBoost and a language-model-based approach using RoBERTa were employed. The study focused on Spanish-language radiological MRI prostate reports, an area that had not been explored before. The results demonstrate that the RoBERTa model outperforms the XGBoost model, although both achieve promising results. Furthermore, the best-performing system was integrated into the radiology company's information systems as an API, operating in a real-world environment.
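Of the two systems named above, the traditional baseline can be sketched compactly: the snippet below feeds TF-IDF features into an XGBoost classifier over PI-RADS categories. TF-IDF is an assumed featurization, since the abstract does not state which text features the authors' XGBoost model used.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

def build_report_classifier():
    """Pipeline: Spanish report text -> TF-IDF vectors -> PI-RADS category."""
    return make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),                          # unigrams + bigrams
        XGBClassifier(n_estimators=300, max_depth=4, eval_metric="mlogloss"),
    )

# usage: clf = build_report_classifier(); clf.fit(report_texts, pirads_labels)
# pirads_labels should be integer-encoded (e.g., 0..4); clf.predict(new_reports) returns categories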

Citations: 0
Multi-label classification of retinal diseases based on fundus images using Resnet and Transformer.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-01 | Epub Date: 2024-06-14 | DOI: 10.1007/s11517-024-03144-6
Jiaqing Zhao, Jianfeng Zhu, Jiangnan He, Guogang Cao, Cuixia Dai

Retinal disorders are a major cause of irreversible vision loss, which can be mitigated through accurate and early diagnosis. Conventionally, fundus images serve as the gold standard for detecting retinal diseases. In recent years, more and more researchers have employed deep learning methods for diagnosing ophthalmic diseases using fundus photography datasets. Most of these studies focus on diagnosing a single disease from fundus images, so the diagnosis of multiple diseases remains challenging. In this paper, we propose a framework that combines ResNet and Transformer for multi-label classification of retinal disease. This model employs ResNet to extract image features, utilizes a Transformer to capture global information, and enhances the relationships between categories through learnable label embeddings. On the publicly available Ocular Disease Intelligent Recognition (ODIR-5K) dataset, the proposed method achieves a mean average precision of 92.86%, an area under the curve (AUC) of 97.27%, and a recall of 90.62%, outperforming other state-of-the-art approaches for multi-label classification. The proposed method represents a significant advancement in the field of retinal disease diagnosis, offering a more accurate, efficient, and comprehensive model for the detection of multiple retinal conditions.
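A schematic PyTorch model in the spirit of the design described above is sketched below: ResNet features serve as Transformer memory, learnable label embeddings act as decoder queries, and each query yields one disease logit. Layer sizes, the decoder wiring, and the eight-label head are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision.models import resnet50

class RetinaMultiLabel(nn.Module):
    def __init__(self, num_labels=8, d_model=256, nhead=8, num_layers=2):
        super().__init__()
        backbone = resnet50(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])  # (B, 2048, H/32, W/32)
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        self.label_embed = nn.Embedding(num_labels, d_model)            # learnable label queries
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)                               # one logit per label token

    def forward(self, x):
        mem = self.proj(self.features(x)).flatten(2).transpose(1, 2)    # (B, HW, d_model)
        queries = self.label_embed.weight.unsqueeze(0).expand(x.size(0), -1, -1)
        out = self.decoder(queries, mem)                                # (B, num_labels, d_model)
        return self.head(out).squeeze(-1)                               # multi-label logits

# usage: probs = RetinaMultiLabel()(torch.randn(2, 3, 224, 224)).sigmoid()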

Citations: 0
VascuConNet: an enhanced connectivity network for vascular segmentation.
IF 2.6 | CAS Tier 4 (Medicine) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2024-11-01 | Epub Date: 2024-06-20 | DOI: 10.1007/s11517-024-03150-8
Muwei Jian, Ronghua Wu, Wenjin Xu, Huixiang Zhi, Chen Tao, Hongyu Chen, Xiaoguang Li

Medical image segmentation commonly involves diverse tissue types and structures, including tasks such as blood vessel segmentation and nerve fiber bundle segmentation. Enhancing the continuity of segmentation outcomes is a pivotal challenge in medical image segmentation, driven by the demands of clinical applications focused on disease localization and quantification. In this study, a novel model is specifically designed for retinal vessel segmentation, leveraging vessel orientation information, boundary constraints, and continuity constraints to improve segmentation accuracy. To achieve this, we cascade U-Net with a long short-term memory (LSTM) network. U-Net is characterized by a small number of parameters and high segmentation efficiency, while LSTM offers a parameter-sharing capability. Additionally, we introduce an orientation information enhancement module, inserted into the model's bottom layer, to obtain feature maps containing orientation information through an orientation convolution operator. Furthermore, we design a new hybrid loss function that consists of connectivity loss, boundary loss, and cross-entropy loss. Experimental results demonstrate that the model achieves excellent segmentation outcomes across three widely recognized retinal vessel segmentation datasets: CHASE_DB1, DRIVE, and ARIA.
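The hybrid loss described above combines several criteria in a weighted sum; the PyTorch sketch below shows that structure with binary cross-entropy plus a soft Dice term as stand-ins, since the paper's specific connectivity and boundary losses are not reproduced here.

import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, w_bce=1.0, w_dice=1.0, eps=1e-6):
    """logits, target: (B, 1, H, W); target is a binary vessel mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target.float())
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3)) + eps)
    return w_bce * bce + w_dice * (1 - dice).mean()      # weighted sum of the loss terms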

Citations: 0