
Latest publications from the 2022 4th International Conference on Biomedical Engineering (IBIOMED)

Cervical Cancer Image Processing with Convolutional Neural Network for Detection
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988514
A. A. Iskandar, Elnora Listianto Lie, K. A. Audah, Rose Khasana Dewi
The diagnostic method for detecting cervical cancer using Pap smears can be laborious and time-consuming, so research on computer-aided diagnosis is essential. The purpose of this study is to aid in distinguishing Pap smear images of various categories of cervical cells by creating an alternative image processing and classification method, so that the burden on pathologists of manually analyzing large numbers of Pap smear images can be reduced in the future. The developed method is intended to help in the detection of abnormality or cancer. The processing methods include Gaussian filtering, Otsu thresholding, Canny edge detection, and a Convolutional Neural Network. The analytical methods used were accuracy and loss curves, together with the evaluation measures of accuracy, precision, recall, and F1 measure. The best trained model had an accuracy, precision, recall, and F1 measure of 93.26%, 92.55%, 91.52%, and 91.84%, respectively. It was concluded that the image processing and classification method could be used to distinguish multi-cell Pap smear images. Even with some limitations, it has the potential to improve single-cell analysis and also aid in classification. In the future, this method may be used in the medical field to help diagnose cervical cancer in Indonesia.
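The pre-processing chain named in the abstract (Gaussian filtering, Otsu thresholding, Canny edge detection) can be sketched with standard OpenCV calls. The sketch below is illustrative only; the kernel size, Canny thresholds, and file names are assumptions, not the authors' actual settings.

```python
import cv2

# Load a Pap smear image in grayscale (file name is hypothetical).
img = cv2.imread("pap_smear.png", cv2.IMREAD_GRAYSCALE)

# 1. Gaussian filtering to suppress noise (5x5 kernel is an assumed setting).
blurred = cv2.GaussianBlur(img, (5, 5), 0)

# 2. Otsu thresholding to separate cell regions from background.
_, mask = cv2.threshold(blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Canny edge detection on the smoothed image (thresholds are illustrative).
edges = cv2.Canny(blurred, 50, 150)

# In the paper's pipeline, images processed this way feed a CNN classifier.
cv2.imwrite("pap_smear_mask.png", mask)
cv2.imwrite("pap_smear_edges.png", edges)
```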
Citations: 0
Quantification of Type III Collagen Deposition Density from Photomicrograph of Vaginal Connective Tissue
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988366
Muhammad Arfan, H. Zakaria
Visualization has always aided clinical trial diagnoses. Unfortunately, the majority of observations are performed manually. Repeatability, samples, and effort are necessary for quantitative research, and larger numbers of samples complicate the process. A density study of type III collagen deposition was manually performed on 105 samples using ImageJ on photomicrographs by adjusting the deposition color in a binary image. Manually examining photomicrographs for collagen fiber density is time-consuming and tiring. This study automatically quantifies the type III collagen deposition density using CellProfiler, which does not require skill in observing large samples and complex research objects, thus enabling a less time-consuming technique. The workflow equalizes illumination and reduces photomicrograph noise to help identify cells, and the line and tubeness features are enhanced to improve the pixel intensity and collagen fiber structure. CellProfiler processed 105 photos in eight minutes and 57 seconds, or 5.1 seconds each. ImageJ required 114 seconds per photomicrograph, or 129.5 minutes in total (depending on the accuracy of the researchers). CellProfiler therefore accelerated image processing by 14.5 times. Comparing the calculations of CellProfiler and ImageJ using linear regression yielded R² = 0.7786, indicating a strong relationship, and produced the equation y = 0.9548x + 1.2197, indicating a positive correlation. This strong relationship and positive correlation suggest that CellProfiler's automatic quantification can assist researchers in measuring complex structures such as collagen fiber with a less time-consuming technique.
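The reported agreement between the two tools corresponds to an ordinary least-squares fit of one set of density measurements against the other. A minimal sketch of that comparison is shown below; the per-sample values are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-sample collagen density values (placeholders, not study data).
imagej_density = np.array([12.1, 15.4, 9.8, 20.3, 17.6])
cellprofiler_density = np.array([12.9, 16.0, 10.7, 20.9, 18.2])

# Ordinary least-squares fit: cellprofiler = slope * imagej + intercept.
fit = stats.linregress(imagej_density, cellprofiler_density)

print(f"y = {fit.slope:.4f}x + {fit.intercept:.4f}")  # paper reports y = 0.9548x + 1.2197
print(f"R^2 = {fit.rvalue**2:.4f}")                   # paper reports R^2 = 0.7786
```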
Citations: 0
IoT Based Pre-Operative Prehabilitation Program Monitoring Model: Implementation and Preliminary Evaluation
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988432
K. Al-Naime, A. Al-Anbuky, G. Mawston
Abdominal cancer is one of the most frequent and dangerous cancers in the world, particularly among the elderly. Major surgery is associated with a significant deterioration in quality of life, and physical fitness and level of activity are considered important factors for patients with cancer undergoing major abdominal surgery. One of the main programs designed to improve a patient's fitness before major surgery is physical exercise (prehabilitation). However, a significant number of patients undergoing major surgery cannot follow such programs because of limited health service resources or because they live in remote locations. This paper discusses a novel IoT concept for precision prehabilitation program monitoring. The solution integrates the traditional 6-week program's follow-up mechanism with an IoT system, which tracks the patient's movement activities anytime and anywhere and records the significant movements specific to the program. As a result, both the patients and the health system are relieved of the restricted capacity and associated cost. A wearable sensor was placed on the participant's ankle, and a gateway and the ThingSpeak platform were developed to perform IoT remote monitoring. The key outcome is the visibility the IoT system provides to support a mixed-mode prehabilitation program by reducing the barriers and obstacles of existing prehabilitation programs.
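One way such a gateway could forward ankle-sensor readings is through ThingSpeak's public HTTP update endpoint. The sketch below assumes a hypothetical write API key and an arbitrary mapping of acceleration axes to channel fields; it is not the authors' implementation.

```python
import time
import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "XXXXXXXXXXXXXXXX"  # hypothetical channel write key

def push_sample(ax: float, ay: float, az: float) -> int:
    """Send one accelerometer sample to a ThingSpeak channel (field mapping assumed)."""
    resp = requests.post(THINGSPEAK_URL, data={
        "api_key": WRITE_API_KEY,
        "field1": ax,
        "field2": ay,
        "field3": az,
    }, timeout=10)
    return resp.status_code

if __name__ == "__main__":
    # ThingSpeak's free tier accepts roughly one update every 15 seconds.
    push_sample(0.02, -0.98, 0.10)
    time.sleep(15)
```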
Citations: 0
A Microfluidic Channel for Separation of Circulating Tumor Cells from Blood Cells Using Dielectrophoresis and Its Performance Analysis Using Adaptive Neuro-Fuzzy Inference System
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988631
Mir Ashib Ullah, Fazlul Rafeeun Khorshed, Md. Ruhul Amin
In this work, a simple electrode arrangement is proposed for a microfluidic channel that uses dielectrophoresis and fluid dynamics to separate circulating tumor cells (CTCs) from blood cells. The dielectrophoresis mechanism, designed considering the Clausius-Mossotti (CM) factor and the electrical and mechanical properties of white blood cells (WBCs), red blood cells (RBCs), and CTCs, drives the rare CTCs isolated from the WBCs and RBCs toward a specified outlet. A comparative analysis of the microfluidic channel over various ranges of inlet velocity and applied electric field was carried out through computer-assisted multiphysics simulations based on the Finite Element Method (FEM), with various governing parameters, using COMSOL, MATLAB, and MyDEP. The proposed channel achieved 100% separation efficiency (SE) and separation purity (SP) for a 4 V peak-to-peak voltage applied to the electrodes. The inputs and outputs of the simulation model were then analyzed with an Adaptive Neuro-Fuzzy Inference System (ANFIS) to suggest specific input values for the most efficient separation in the microfluidic channel.
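The Clausius-Mossotti factor that sets the sign and strength of the DEP force can be evaluated from the complex permittivities of the particle and the suspending medium. The sketch below uses the textbook homogeneous-sphere form; the drive frequency and the cell and medium properties are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def clausius_mossotti(eps_p, sigma_p, eps_m, sigma_m, freq_hz):
    """Real part of the CM factor for a homogeneous sphere in a medium."""
    omega = 2 * np.pi * freq_hz
    eps0 = 8.854e-12
    # Complex permittivities: eps* = eps - j*sigma/omega
    cp = eps_p * eps0 - 1j * sigma_p / omega
    cm = eps_m * eps0 - 1j * sigma_m / omega
    return ((cp - cm) / (cp + 2 * cm)).real

# Illustrative values only (relative permittivity, conductivity in S/m).
f = 100e3  # assumed 100 kHz drive frequency
print("CTC-like particle :", clausius_mossotti(60, 0.30, 78, 0.055, f))
print("RBC-like particle :", clausius_mossotti(59, 0.31, 78, 0.055, f))
```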
Citations: 0
Acute Lymphoblastic Leukemia Image Classification Performance with Transfer Learning Using CNN Architecture
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988690
Aiman Muhamad Basymeleh, Bagus Esa Pramudya, Reinato Teguh Santoso
Leukemia is diagnosed by observing two indicators, the bone marrow smear and the peripheral blood smear, which requires laboratory skill and a microscope. These diagnostic tests require advanced laboratory work and face other limitations such as time and cost. Given these limitations, this study compares deep learning architectures trained with image augmentation on HSV images for diagnosis and classification into four label outputs, using the Adam optimizer. As a result of this study, VGG16 achieved better evaluation results than the other architectures, attaining an accuracy, sensitivity, specificity, and validation accuracy of 97.50%, 99.96%, 100%, and 98.44%, respectively. For development in real cases, the model can in the future be applied directly to the relevant setting or extended with a new method architecture.
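A transfer-learning setup of the kind the abstract describes, an ImageNet-pretrained VGG16 backbone with a new four-class head trained with Adam, can be sketched in Keras as below. The input size, head layers, and hyperparameters are assumptions for illustration, not the study's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # four leukemia label outputs, as in the abstract

# ImageNet-pretrained VGG16 backbone, frozen for feature extraction.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # head size is an assumed choice
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # datasets not shown here
```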
Citations: 1
NLP Analysis of COVID-19 Radiology Reports in Indonesian using IndoBERT
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988223
N. N. Qomariyah, Tianda Sun, D. Kazakov
The presence of COVID-19, a respiratory disease, can be detected through medical imaging such as Chest X-Ray (CXR) and Computed Tomography (CT) scans. These radiology images can also show how the patient's condition progresses. Radiologists need to provide a written report for each image so that other clinicians can use it in their decision making. In this study, we applied one of the Natural Language Processing (NLP) models, called IndoBERT, to analyze radiology reports of COVID-19 patients written in Indonesian. We performed two tasks: clustering, to group reports by meaning and understand their content, and text classification, to predict one of the five possible outcomes for each patient. We show the most frequent topics in the radiology reports and the word scores within each topic. The IndoBERT model was fine-tuned on a medical text, 'Kamus Kedokteran Dorland', in an attempt to further improve it. This proved unnecessary: on one hand, there were no additional benefits; on the other, the standard model alone achieved a very satisfactory classification accuracy of over 90%.
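Loading an IndoBERT checkpoint for a five-way report classification of the kind described can be sketched with the Hugging Face transformers library. The checkpoint name below (indobenchmark/indobert-base-p1), the toy report text, and the label count wiring are assumptions, since the paper does not state which release or training setup it used.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Checkpoint name is an assumption; several IndoBERT releases exist on the Hub.
CKPT = "indobenchmark/indobert-base-p1"

tokenizer = AutoTokenizer.from_pretrained(CKPT)
model = AutoModelForSequenceClassification.from_pretrained(CKPT, num_labels=5)

report = "Tampak infiltrat pada kedua lapang paru."  # toy Indonesian report sentence
inputs = tokenizer(report, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    logits = model(**inputs).logits  # untrained head: fine-tuning on labeled reports is needed
print("Predicted outcome class:", logits.argmax(dim=-1).item())
```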
Citations: 0
Breast Cancer Image Pre-Processing With Convolutional Neural Network For Detection and Classification
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988446
A. A. Iskandar, M. Jeremy, M. Fathony
Breast cancer is one of the most common types of cancer. This research was conducted with the purpose of developing a Computer-Aided Diagnosis system to detect breast cancers from mammogram images. The mammogram images were obtained from the INbreast dataset and Husada Hospital in Jakarta. The program uses a pre-processing stage, which includes Median Filtering, Otsu thresholding, Truncation Normalization, and Contrast Limited Adaptive Histogram Equalization, to enhance the images, and a Convolutional Neural Network to classify them as either mass or normal, and as either benign or malignant. The pre-processing pipeline provided enhanced images that were used to train and test the Convolutional Neural Network. The best model achieved an accuracy, precision, and sensitivity of 94.1%, 100%, and 85.7% in classifying the mammogram images as benign or malignant, and 88.3%, 92.6%, and 83.3% in classifying them as mass or normal. In conclusion, the algorithm was able to classify mammogram images and provided results as high as those of other related research.
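A rough sketch of the pre-processing chain named in the abstract (median filtering, Otsu thresholding, intensity truncation/normalization, and CLAHE) with OpenCV is given below. Kernel sizes, clip limits, and the truncation percentiles are illustrative assumptions rather than the authors' settings.

```python
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # file name is hypothetical

# 1. Median filtering to remove impulse noise (kernel size assumed).
den = cv2.medianBlur(img, 5)

# 2. Otsu thresholding to isolate the breast region from the background.
_, breast_mask = cv2.threshold(den, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Truncation normalization: clip extreme intensities inside the breast region,
#    then rescale to 0-255 (percentile choice is an assumption).
vals = den[breast_mask > 0]
lo, hi = np.percentile(vals, [1, 99])
norm = np.clip(den, lo, hi)
norm = ((norm - lo) / (hi - lo) * 255).astype(np.uint8)

# 4. Contrast Limited Adaptive Histogram Equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(norm)

cv2.imwrite("mammogram_enhanced.png", enhanced)
```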
Citations: 0
The Impact of Filtering for Breast Ultrasound Segmentation using A Visual Attention Model
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9988361
D. N. K. Hardani, H. A. Nugroho, I. Ardiyanto
Breast cancer can threaten women's health and become a cause of death. Reducing mortality from breast cancer necessitates early recognition of its signs and symptoms. An essential step in building an early detection system is segmenting the breast ultrasound (BUS) image. The accuracy of segmentation has a direct bearing on the effectiveness of quantitative analysis and the detection of breast tumors. However, segmentation is constrained because BUS images are of low quality, so pre-processing steps are necessary to improve the image. This study aims to compare the efficiency of various filtering techniques for BUS segmentation with a visual attention model. Twelve filters are tested in this study: Mean, Median, Bilateral, Fast nonlinear, Lee, Lee-enhanced, Frost, Kuan, Gamma, Wiener, the Speckle Reduction Anisotropic Diffusion filter (SRAD), and the Detail Preserved Anisotropic Diffusion filter (DPAD). The segmentation process uses a Convolutional Neural Network (CNN) based architecture, namely the Visual Geometry Group architecture with 16 layers (VGG-16). The segmentation results were analyzed using three visual attention models. The results showed visually significant differences between the images before and after filtering.
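Several of the listed filters are speckle filters that are not bundled with common image libraries. As one example, a minimal Lee filter can be written from local statistics as sketched below; the window size and the noise-variance estimate are assumptions, and this is not the exact implementation used in the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img: np.ndarray, size: int = 7) -> np.ndarray:
    """Basic Lee speckle filter: weighted blend of pixel value and local mean."""
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size)
    local_sq_mean = uniform_filter(img ** 2, size)
    local_var = local_sq_mean - local_mean ** 2

    # Global average of the local variance as a crude noise-variance estimate (assumption).
    noise_var = local_var.mean()

    weight = local_var / (local_var + noise_var + 1e-12)
    return local_mean + weight * (img - local_mean)

# Example: filter a random "ultrasound-like" image (placeholder data).
noisy = np.random.rand(128, 128) * 255
smoothed = lee_filter(noisy, size=7)
```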
Citations: 0
Optic Disc Segmentation Based on Mask R-CNN in Retinal Fundus Images
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9987756
I. G. Pande Darma Suardika, I. M. Dendi Maysanjaya, Made Windu Antara Kesiman
The optic disc is an object on the retina of the eye that is characteristically bright and round. Optic disc segmentation is the most common step taken before processing a retinal fundus image, because the bright appearance of the optic disc often interferes with the detection of other objects in the image; segmenting it is therefore the first step before further processing, and digital image processing helps in removing the optic disc from the fundus image. Many methods can be used for optic disc segmentation, one of which is deep learning. The deep learning method chosen here is Mask R-CNN, which produces a mask from the results of object detection on the retinal fundus image. There are three stages in the segmentation process using Mask R-CNN. First, the data used in the training process are labeled; one label is given, namely optic disc. Then the model is trained using the ResNet50 backbone architecture, and finally the model is evaluated. The results are evaluated using Intersection over Union (IoU) by directly comparing the prediction and the ground truth. The data used is the IDRiD dataset, which contains retinal fundus images taken from eye clinics across India. As a result, Mask R-CNN can segment the optic disc with an IoU value of 0.843. It is hoped that the results of this research can help the processing of retinal fundus images in the future.
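The IoU score used for evaluation is the ratio of overlap to union between the predicted and ground-truth optic disc masks. A minimal sketch of that computation on binary masks is shown below, with toy masks standing in for actual segmentation output.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over Union between two binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union) if union > 0 else 1.0

# Toy example: two overlapping square "optic disc" masks.
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
gt   = np.zeros((64, 64), dtype=np.uint8); gt[15:45, 15:45] = 1
print(f"IoU = {mask_iou(pred, gt):.3f}")
```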
Citations: 1
Development of Gamification Design on Heart Anatomy Learning Media
Pub Date : 2022-10-18 DOI: 10.1109/IBIOMED56408.2022.9987817
Zahra'ul Athiyah, A. E. Permanasari, S. Wibirama
Cadavers as a learning medium for manual anatomy are very important for students, because they are believed to give a different impression from learning with other media. However, learning support media are still needed given the limitations on the use of cadavers. Digital media are needed that students can use wherever they are, without reducing the value or content of the material they usually learn manually. The authors propose a learning model with a mobile app-based gamification approach that can attract users' attention, increase learning motivation, and increase student interaction in the learning process. We built a Heart mobile application equipped with materials and quizzes. This paper presents the gamification design for the Heart application using a tetrad approach. The anatomy learning media use 3D visualization and Augmented Reality, and the learning flow is designed using the gamification method. The Heart program offers a simple and fun learning path. The System Usability Scale (SUS) evaluation gives a score of 72.25, which falls into the Good Usability category. Finally, the gamification strategies are expected to improve users' efficacy and learning outcomes.
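The reported SUS score of 72.25 follows the standard System Usability Scale scoring rule: odd-numbered items contribute (score − 1), even-numbered items contribute (5 − score), the sum is multiplied by 2.5, and the result is averaged over respondents. A small sketch of that calculation is given below with made-up questionnaire answers.

```python
def sus_score(responses):
    """SUS score for one respondent: a list of ten 1-5 Likert answers."""
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Made-up answers from three hypothetical respondents (not the study's data).
respondents = [
    [4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
    [5, 1, 4, 2, 4, 2, 5, 1, 4, 3],
    [3, 2, 4, 2, 4, 3, 4, 2, 3, 2],
]
scores = [sus_score(r) for r in respondents]
print("Mean SUS score:", sum(scores) / len(scores))
```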
Citations: 0