
Latest publications: 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)

A novel CNN model with dense connectivity and attention mechanism for arrhythmia classification
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00016
Qin Zhan, Peilin Li, Yongle Wu, Jingchun Huang, Xunde Dong
Cardiac arrhythmia is a common cardiovascular disease that can cause sudden death in severe cases. Electrocardiography (ECG) is the best-known and most widely applied method for heart disease detection. Computer-aided ECG diagnosis can help improve physician efficiency and reduce the rate of ECG misdiagnosis. In this paper, we propose a method for arrhythmia classification based on the dense convolutional network (DenseNet) and efficient channel attention (ECA). Evaluation experiments were performed using ECG records from the MIT-BIH database. Accuracy, sensitivity, specificity, and F1 scores of 99.69%, 97.55%, 99.81%, and 97.72%, respectively, were achieved for six-class heartbeat classification. The experimental results demonstrate the validity and feasibility of the method, which can be used for ECG screening.
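The ECA gating idea can be sketched in a few lines of NumPy: global average pooling produces one descriptor per channel, a small 1-D convolution across channels models local cross-channel interaction, and a sigmoid gate rescales the feature map. This is an illustration of the mechanism, not the paper's implementation; the fixed averaging kernel stands in for the learned convolution weights.

```python
import numpy as np

def eca_attention(feature_map, k=3):
    """Efficient Channel Attention (ECA) sketch.

    feature_map: (C, L) array, C channels of a 1-D ECG feature map.
    Returns the feature map rescaled by per-channel attention weights.
    """
    C, L = feature_map.shape
    # Squeeze: global average pooling over the temporal axis -> (C,)
    desc = feature_map.mean(axis=1)
    # Local cross-channel interaction: 1-D convolution of size k over channels
    pad = k // 2
    padded = np.pad(desc, pad, mode="edge")
    kernel = np.full(k, 1.0 / k)  # illustrative fixed kernel (learned in practice)
    conv = np.convolve(padded, kernel, mode="valid")  # -> (C,)
    # Excitation: sigmoid gate, then rescale each channel
    gate = 1.0 / (1.0 + np.exp(-conv))
    return feature_map * gate[:, None]

x = np.random.default_rng(0).normal(size=(8, 128))
y = eca_attention(x)
```

Unlike full channel attention, the gate here depends only on a local neighbourhood of channel descriptors, which is what keeps ECA cheap.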
Citations: 1
Exploiting AI to make insulin pens smart: injection site recognition and lipodystrophy detection
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00044
E. Torre, Luisa Francini, E. Cordelli, R. Sicilia, S. Manfrini, V. Piemonte, P. Soda
Diabetes remains one of the leading causes of death worldwide and has serious consequences if not properly treated. The advent of hybrid closed-loop systems, connections with consumer electronics, and cloud-based data systems have hastened the advancement of diabetes technology. In the wake of this progress, we exploit information technology to make insulin pens smart, so as to promote adherence to injection therapy and improve the socio-economic impact for the patient. This work focuses on two main open issues: injection site rotation and lipodystrophy detection while the patient is taking insulin. The first is addressed by collecting data with an IMU sensor and processing them with a machine learning classifier to detect the injection site. The second is tackled with a sensor equipped with two LEDs: features computed from these signals feed a one-class Support Vector Machine trained to recognise healthy tissue, so that samples differing from those in the training set can be considered lipodystrophies. The results for injection site recognition show an average accuracy above 0.957, whilst for lipodystrophy detection we reach an accuracy above 0.95 using the IR LED.
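The one-class idea behind the lipodystrophy detector can be illustrated with a minimal stand-in: fit a description of healthy-tissue features only, and flag anything that falls outside it. The sketch below uses a distance-to-centroid threshold in place of the paper's one-class SVM; the simulated features and shapes are hypothetical.

```python
import numpy as np

def fit_one_class(train_feats, quantile=0.95):
    """Fit a minimal one-class detector on healthy-tissue features:
    distance-to-centroid with a quantile threshold (a simplified
    stand-in for a trained one-class SVM)."""
    center = train_feats.mean(axis=0)
    d = np.linalg.norm(train_feats - center, axis=1)
    threshold = np.quantile(d, quantile)
    return center, threshold

def is_lipodystrophy(feat, center, threshold):
    """Samples far from the healthy-tissue cluster are flagged as anomalous."""
    return np.linalg.norm(feat - center) > threshold

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(200, 4))  # simulated LED-signal features
center, thr = fit_one_class(healthy)
```

As in the paper's setup, no lipodystrophy samples are needed at training time; only deviation from the healthy distribution is scored.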
Citations: 0
Learning Pre- and Post-contrast Representation for Breast Cancer Segmentation in DCE-MRI
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00070
Hong Wu, Yingwen Huo, Yupeng Pan, Zeyan Xu, Rian Huang, Yu Xie, Chu Han, Zaiyi Liu, Yi Wang
Breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a considerable role in high-risk breast cancer diagnosis and image-based prognostic prediction. Accurate and robust segmentation of cancerous regions is in high clinical demand. However, automatic segmentation remains challenging due to the large variation of cancers in shape and size and the class-imbalance issue. To tackle these problems, we offer a two-stage framework that leverages both pre- and post-contrast images for the segmentation of breast cancer. Specifically, we first employ a breast segmentation network, which generates the breast region of interest (ROI) and thus removes confounding information from the thorax region in DCE-MRI. Furthermore, based on the generated breast ROI, we offer an attention network to learn both pre- and post-contrast representations for distinguishing cancerous regions from normal breast tissue. The efficacy of our framework is evaluated on a collected dataset of 261 patients with biopsy-proven breast cancers. Experimental results demonstrate that our method attains a Dice coefficient of 91.11% for breast cancer segmentation. The proposed framework provides an effective cancer segmentation solution for breast examination using DCE-MRI. The code is publicly available at https://github.com/2313595986/BreastCancerMRI.
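The two-stage design can be sketched as follows: the stage-1 breast mask suppresses thorax signal and defines a crop, and stage 2 receives the pre- and post-contrast crops stacked as two input channels. A minimal NumPy sketch with hypothetical volume shapes:

```python
import numpy as np

def crop_roi(volume, mask):
    """Crop a volume to the bounding box of a binary breast mask."""
    idx = np.argwhere(mask)
    (z0, y0, x0), (z1, y1, x1) = idx.min(0), idx.max(0) + 1
    return volume[z0:z1, y0:y1, x0:x1]

def two_stage_input(pre, post, breast_mask):
    """Stage-1 output (breast_mask) removes thorax signal; the stage-2
    input stacks pre- and post-contrast crops as two channels."""
    pre_roi = crop_roi(pre * breast_mask, breast_mask)
    post_roi = crop_roi(post * breast_mask, breast_mask)
    return np.stack([pre_roi, post_roi], axis=0)

pre = np.random.default_rng(2).random((16, 64, 64))
post = np.random.default_rng(3).random((16, 64, 64))
mask = np.zeros((16, 64, 64))
mask[4:12, 10:50, 10:50] = 1
x = two_stage_input(pre, post, mask)
```

Cropping to the ROI also mitigates the class-imbalance issue the abstract mentions, since far fewer background voxels reach the second network.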
Citations: 1
Breast Lesions Segmentation using Dual-level UNet (DL-UNet)
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00067
Yanjiao Zhao, Zhihui Lai, Linlin Shen, Heng Kong
Breast disease is one of the primary diseases endangering women's health. Accurate segmentation of breast lesions can help doctors diagnose breast diseases. However, breast lesions differ in size and morphology, and the intensity of breast tissue is uneven, so it is challenging to segment the lesion area accurately. In this paper, we propose a Dual-scale Feature Fusion (DSFF) module and an edge-aware loss (Edgeloss) to segment breast lesions. The DSFF module integrates features at two scales and introduces an effective skip-connection scheme to reduce false-positive regions. To address unclear segmentation boundaries, Edgeloss provides additional supervision on the boundary region to obtain a finer segmentation boundary. Experimental results show that the proposed DL-UNet with the DSFF module and the new Edgeloss performs best among several classic networks.
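A minimal NumPy sketch of the dual-scale fusion idea: the coarse feature map is upsampled to the fine resolution and the two maps are merged along channels. The learned re-weighting inside the actual DSFF module is omitted, so this only illustrates the data flow.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def dual_scale_fusion(high_res, low_res):
    """Illustrative dual-scale fusion: bring the coarse map to the fine
    resolution, then concatenate along channels for the decoder."""
    return np.concatenate([high_res, upsample2x(low_res)], axis=0)

hi = np.ones((8, 32, 32))    # fine-scale features
lo = np.ones((16, 16, 16))   # coarse-scale features
fused = dual_scale_fusion(hi, lo)
```

In a U-Net-style network this fused tensor would replace the plain skip connection, which is where the false-positive reduction claimed for DSFF comes in.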
Citations: 2
TSEUnet: A 3D neural network with fused Transformer and SE-Attention for brain tumor segmentation
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00030
Yan-Min Chen, Jiajun Wang
Brain tumor segmentation of 3D magnetic resonance (MR) images is of great significance for brain diagnosis. Although the U-Net and its variants have achieved outstanding performance in medical image segmentation, challenges remain because CNN-based models are powerful at extracting local features but weak at capturing global representations. To tackle this problem, we propose a 3D network structure based on nnUNet, named TSEUnet. In this network, a transformer module is introduced into the encoder in a parallel, interactive manner so that both local features and global contexts can be efficiently extracted. Moreover, SE-Attention is incorporated in the decoder to enhance meaningful information and improve segmentation accuracy for the brain tumor area. In addition, we propose a post-processing method to further improve brain tumor segmentation. Experiments on the BRATS 2018 dataset show that our proposed TSEUnet achieves better performance on brain tumor segmentation than state-of-the-art methods.
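The SE-Attention used in the decoder follows the standard squeeze-and-excitation pattern: global average pooling per channel, a two-layer bottleneck, and a sigmoid gate that rescales channels. A minimal NumPy sketch with random (untrained) weights; shapes and the reduction ratio are illustrative:

```python
import numpy as np

def se_attention(feat, w1, w2):
    """Squeeze-and-Excitation sketch for a (C, D, H, W) feature map.
    w1: (C, C//r) and w2: (C//r, C) are the excitation weight matrices."""
    C = feat.shape[0]
    squeeze = feat.reshape(C, -1).mean(axis=1)     # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gate -> (C,)
    return feat * gate[:, None, None, None]        # rescale each channel

rng = np.random.default_rng(4)
x = rng.normal(size=(8, 4, 16, 16))
y = se_attention(x, rng.normal(size=(8, 2)), rng.normal(size=(2, 8)))
```

The bottleneck (here reduction ratio r = 4) keeps the channel-attention parameters cheap relative to the 3D convolutions around it.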
Citations: 1
Contrastive learning-based Adenoid Hypertrophy Grading Network Using Nasoendoscopic Image
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00074
Siting Zheng, Xuechen Li, Mingmin Bi, Yuxuan Wang, Haiyan Liu, Xia Feng, Yunping Fan, Linlin Shen
Adenoid hypertrophy is a common otolaryngological disease in children. Otolaryngologists usually use nasoendoscopy for adenoid hypertrophy screening, but grading is tedious and time-consuming. So far, artificial intelligence technology has not been applied to the grading of nasoendoscopic adenoid images. In this work, we first propose a novel multi-scale grading network, MIB-ANet, for adenoid hypertrophy classification. We further propose a contrastive learning-based network to alleviate the overfitting caused by the lack of nasoendoscopic adenoid images with high-quality annotations. Experimental results show that MIB-ANet achieves the best grading performance compared to four classic CNNs, i.e., AlexNet, VGG16, ResNet50, and GoogleNet. For example, MIB-ANet achieves a 1.38% higher $F_{1}$ score than the best baseline CNN, AlexNet. Because the contrastive learning-based pre-training strategy can exploit unannotated data, pre-training with the SimCLR pretext task consistently improves the performance of MIB-ANet at different ratios of labeled training data.
The MIB-ANet pre-trained with the SimCLR pretext task achieves 4.41%, 2.64%, 3.10%, and 1.71% higher $F_{1}$ scores when 25%, 50%, 75%, and 100% of the training data are labeled, respectively.
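SimCLR's pretext task optimizes the NT-Xent loss, which pulls two augmented views of the same image together in embedding space and pushes all other samples in the batch apart. A minimal NumPy version of the loss (batch-level only, no learned encoder), as a sketch of the mechanism rather than the paper's training code:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss (SimCLR) for embeddings z1, z2 of shape
    (N, d), where z1[i] and z2[i] are two views of the same image."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine-similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    N = len(z1)
    # Row i's positive is the other view of the same image
    positives = np.concatenate([np.arange(N, 2 * N), np.arange(N)])
    log_prob = sim[np.arange(2 * N), positives] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

rng = np.random.default_rng(5)
a = rng.normal(size=(4, 8))
loss_aligned = nt_xent(a, a + 1e-3 * rng.normal(size=(4, 8)))  # matched views
loss_random = nt_xent(a, rng.normal(size=(4, 8)))              # unrelated views
```

Matched views give a lower loss than unrelated ones, which is exactly the signal that lets the pre-training exploit unannotated nasoendoscopic images.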
Citations: 1
A YOLO-based Object Simplification Approach for Visual Prostheses
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00039
Reham H. Elnabawy, Slim Abdennadher, O. Hellwich, S. Eldawlatly
Visual prostheses have been introduced to partially restore vision to the blind via visual pathway stimulation. Despite their success, implanted patients have reported some challenges. One of these is the difficulty of object recognition due to the low resolution of the images perceived through these devices. In this paper, a deep learning-based approach combined with image pre-processing is proposed to allow visual prosthesis users to recognize objects in a given scene. The approach simplifies the objects in the scene by displaying them in clip-art form to enhance object recognition. These clip-art images are generated by first identifying the objects in the scene using the You Only Look Once (YOLO) deep neural network; the clip art corresponding to each identified object is then retrieved via Google Images. Three experiments were conducted to measure the success of the proposed approach using simulated prosthetic vision. Our results reveal a remarkable decrease in recognition time and increases in recognition accuracy and confidence level when using the clip-art representation as opposed to the actual images of the objects.
These results demonstrate the utility of object simplification in enhancing the perception of images in prosthetic vision.
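The detection and clip-art retrieval steps require external models and services (YOLO, Google Images), but the low-resolution rendering behind "simulated prosthetic vision" can be sketched as block averaging onto a coarse phosphene grid. The grid size below is illustrative, not the paper's device resolution:

```python
import numpy as np

def simulate_prosthetic_vision(image, grid=(32, 32)):
    """Block-average a grayscale image down to a coarse phosphene grid,
    mimicking the low resolution perceived through a visual prosthesis."""
    H, W = image.shape
    gh, gw = grid
    # Trim to a multiple of the grid, then average each block
    blocks = image[: H - H % gh, : W - W % gw].reshape(gh, H // gh, gw, W // gw)
    return blocks.mean(axis=(1, 3))

img = np.random.default_rng(6).random((256, 256))
ph = simulate_prosthetic_vision(img)
```

Rendering a flat clip-art drawing through such a grid preserves object outlines far better than a cluttered photograph, which is the intuition behind the reported recognition gains.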
Citations: 1
Data Augmentation Methods For Object Detection and Segmentation In Ultrasound Scans: An Empirical Comparative Study
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00057
Sachintha R. Brandigampala, Abdullah F. Al-Battal, Truong Q. Nguyen
In ultrasound imaging, sonographers are tasked with analyzing scans for diagnostic purposes, a challenging task, especially for novice sonographers. Deep learning methods have shown great potential in inferring semantics and key information from scans to assist with these tasks. However, deep learning methods require large training sets to accomplish tasks such as segmentation and object detection, and generating such large datasets is a significant challenge in the medical domain due to the high cost of acquisition and annotation. Therefore, data augmentation is used to enlarge training datasets and create the variability deep learning models need to generalize. These augmentation methods try to mimic differences among scans that result from noise, tissue movement, acquisition settings, and other factors. In this paper, we empirically analyze and compare the effectiveness of general augmentation methods that perform color, rigid, and non-rigid geometric transformations in improving the performance of three segmentation architectures on three different ultrasound datasets. We observe that non-rigid geometric transformations produce the best performance improvement.
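The rigid vs. non-rigid distinction can be illustrated with two toy transforms: a rotation that preserves geometry, and a sinusoidal displacement field that deforms it. Both are applied identically to scan and mask; the displacement model is a simplified stand-in for the elastic deformations typically used in practice.

```python
import numpy as np

def rotate90(image, mask):
    """Rigid augmentation: rotate scan and mask together."""
    return np.rot90(image), np.rot90(mask)

def elastic_warp(image, mask, amp=3.0, freq=0.05):
    """Non-rigid augmentation sketch: sinusoidal row displacement with
    nearest-neighbour resampling, applied identically to scan and mask."""
    H, W = image.shape
    rows, cols = np.indices((H, W))
    shift = amp * np.sin(2 * np.pi * freq * cols)
    src_rows = np.clip((rows + shift).round().astype(int), 0, H - 1)
    return image[src_rows, cols], mask[src_rows, cols]

rng = np.random.default_rng(7)
img = rng.random((64, 64))
msk = (rng.random((64, 64)) > 0.5).astype(int)
ri, rm = rotate90(img, msk)
wi, wm = elastic_warp(img, msk)
```

Warping the mask with the same field keeps the segmentation labels consistent, which is the key requirement for geometric augmentation in segmentation tasks.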
Citations: 0
Ethically Informed Software Process for Smart Health Home
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00040
Xiang Zhang, M. Pike, Nasser Mustafa, V. Brusic
Smart health homes (SHHs) integrate wearable sensors and various interconnected devices using Internet of Things (IoT) technologies. SHHs combine IoT, data communication, and health-related applications to deliver healthcare services at home. Existing regulations and standards for SHH design are insufficient for home health care: technical and device standards are available to guide SHH design and implementation, but ethical standards are lacking. We identified six ethical requirements important for SHHs: safety/trust, privacy/data security, vulnerable groups, individual autonomy, transparency/explainability/fairness, and social responsibility/morality. We identified a set of questions useful in the software engineering (SE) process for ethically informed software in SHH design and mapped them to the steps of the software process. We also mapped related guidelines from relevant professional codes of conduct. These questions can guide an ethically informed SHH software process.
{"title":"Ethically Informed Software Process for Smart Health Home","authors":"Xiang Zhang, M. Pike, Nasser Mustafa, V. Brusic","doi":"10.1109/CBMS55023.2022.00040","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00040","url":null,"abstract":"Smart health homes (SHHs) integrate wearable sensors and various interconnected devices using the Internet of Things (IoT) technologies. SHHs combine IoT, data communication, and health-related applications to deliver healthcare services at home. The existing regulations and standards for SHH design are insufficient for home health care. Technical and device standards are available for guiding SHH design and implementation, but ethical standards are lacking. We identified six ethical requirements important for SHH: safety/trust, privacy/data security, vulnerable groups, individual autonomy, transparency/explainability/fairness, and social responsibility/ morality. We identified a set of questions useful for software engineering (SE) process for ethically informed software in SHH design and mapped them to the steps of software process. We mapped related guidelines from relevant professional codes of conduct. These questions can guide ethically informed software process of SHH.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123512633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
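The SHH paper above maps ethical questions onto software-process steps. A minimal sketch of such a mapping, using the paper's six requirement areas; the process steps and the concrete questions here are illustrative assumptions, not taken from the paper:

```python
# Hypothetical checklist: the six ethical requirement areas from the paper,
# mapped onto generic software-process steps. Questions are illustrative.
ETHICS_CHECKLIST = {
    "requirements": [
        ("safety/trust", "Can a sensor or actuator fault harm the resident?"),
        ("vulnerable groups", "Are elderly or disabled users considered explicitly?"),
    ],
    "design": [
        ("privacy/data security", "Is health data encrypted at rest and in transit?"),
        ("individual autonomy", "Can the resident override automated decisions?"),
    ],
    "validation": [
        ("transparency/explainability/fairness", "Can each automated action be explained?"),
        ("social responsibility/morality", "Who is accountable when the system errs?"),
    ],
}

def open_questions(step):
    """Return the (requirement, question) pairs to review at a process step."""
    return ETHICS_CHECKLIST.get(step, [])
```

A review tool could walk this table at each milestone and block sign-off until every question for that step has a recorded answer.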
Classification of cardiac cohorts based on morphological and hemodynamic features derived from 4D PC-MRI data
Pub Date : 2022-07-01 DOI: 10.1109/CBMS55023.2022.00081
Uli Niemann, Atrayee Neog, B. Behrendt, K. Lawonn, M. Gutberlet, M. Spiliopoulou, B. Preim, M. Meuschke
An accurate assessment of the cardiovascular system and prediction of cardiovascular diseases (CVDs) are crucial. Cardiac blood flow data provide insights about patient-specific hemodynamics. However, there is a lack of machine learning approaches for a feature-based classification of heart-healthy people and patients with CVDs. In this paper, we investigate the potential of morphological and hemodynamic features extracted from measured blood flow data in the aorta to classify heart-healthy volunteers (HHV) and patients with bicuspid aortic valve (BAV). Furthermore, we determine features that distinguish male vs. female patients and elderly HHV vs. BAV patients. We propose a data analysis pipeline for cardiac status classification, encompassing feature selection, model training, and hyperparameter tuning. Our results suggest substantial differences in flow features of the aorta between HHV and BAV patients. The excellent performance of the classifiers separating between elderly HHV and BAV patients indicates that aging is not associated with pathological morphology and hemodynamics. Our models represent a first step towards automated diagnosis of CVS using interpretable machine learning models.
{"title":"Classification of cardiac cohorts based on morphological and hemodynamic features derived from 4D PC-MRI data","authors":"Uli Niemann, Atrayee Neog, B. Behrendt, K. Lawonn, M. Gutberlet, M. Spiliopoulou, B. Preim, M. Meuschke","doi":"10.1109/CBMS55023.2022.00081","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00081","url":null,"abstract":"An accurate assessment of the cardiovascular system and prediction of cardiovascular diseases (CVDs) are crucial. Cardiac blood flow data provide insights about patient-specific hemodynamics. However, there is a lack of machine learning approaches for a feature-based classification of heart-healthy people and patients with CVDs. In this paper, we investigate the potential of morphological and hemodynamic features extracted from measured blood flow data in the aorta to classify heart-healthy volunteers (HHV) and patients with bicuspid aortic valve (BAV). Furthermore, we determine features that distinguish male vs. female patients and elderly HHV vs. BAV patients. We propose a data analysis pipeline for cardiac status classification, encompassing feature selection, model training, and hyperparameter tuning. Our results suggest substantial differences in flow features of the aorta between HHV and BAV patients. The excellent performance of the classifiers separating between elderly HHV and BAV patients indicates that aging is not associated with pathological morphology and hemodynamics. Our models represent a first step towards automated diagnosis of CVS using interpretable machine learning models.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116164233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
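The cardiac cohort paper above describes a pipeline of feature selection, model training, and hyperparameter tuning. A minimal scikit-learn sketch of that three-stage shape; the paper's aortic 4D PC-MRI features are not public, so a synthetic binary problem (standing in for HHV vs. BAV) and the chosen estimators are assumptions, not the authors' configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy stand-in for the morphological/hemodynamic feature table (2 cohorts).
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=6, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif)),               # feature selection
    ("clf", RandomForestClassifier(random_state=0)),  # model training
])

# Hyperparameter tuning over the selector and the classifier jointly.
grid = GridSearchCV(
    pipe,
    {"select__k": [5, 10], "clf__n_estimators": [50, 100]},
    cv=3,
)
grid.fit(X, y)
```

Putting the selector inside the pipeline matters: it is refit within each cross-validation fold, so selected features never leak information from the held-out fold into training.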