
Latest Publications in the Journal of Medical and Biological Engineering

1D Convolutional Neural Network Impact on Heart Rate Metrics for ECG and BCG Signals
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-06-05 · DOI: 10.1007/s40846-024-00872-w
Juan Pablo Moreno, Miguel A. Sepúlveda, Esteban J. Pino

Purpose

The presence of motion artifacts (MA) in cardiac signals negatively impacts the reliability of higher-level information such as the Heart Rate (HR), and therefore the correct diagnosis of pathologies. This paper proposes an MA detection method, based on One-Dimensional Convolutional Neural Networks (1D CNN), to label noisy zones of signals as unreliable, and subsequently avoid them for metric calculations.

Methods

To validate the concept, we first design a CNN to detect MAs in electrocardiogram (ECG) recordings from the MIT-BIH Arrhythmia and Noise Stress Test databases. This network extracts features from 1 s data segments and then classifies them as clean or noisy. We then train a tuned version of the model with semi-synthetic ballistocardiogram (BCG) signals.
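As a rough illustration of the kind of architecture described here — not the authors' published network — the sketch below classifies 1 s segments as clean or noisy. The 360-sample input assumes the MIT-BIH sampling rate of 360 Hz, and all layer sizes are placeholders.

```python
# Hypothetical sketch of a 1D CNN binary classifier for 1 s cardiac signal
# segments (clean vs. noisy); layer sizes are illustrative, not the authors'.
import torch
import torch.nn as nn

class SegmentClassifier(nn.Module):
    def __init__(self, n_samples: int = 360):  # e.g. 1 s at 360 Hz (MIT-BIH)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (n_samples // 4), 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits: clean vs. noisy
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_samples)
        return self.classifier(self.features(x))

model = SegmentClassifier()
dummy = torch.randn(8, 1, 360)   # a batch of eight 1 s segments
print(model(dummy).shape)        # torch.Size([8, 2])
```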

Results

The ECG classification achieves an accuracy of 95.9% and the BCG classification an accuracy of 91.1%. Both classifiers are incorporated into beat detection systems, which increases the sensitivity of the detection algorithms from 75% to 98.5% in the ECG case and from 72.1% to 94.5% in the BCG case, for signals contaminated at 0 dB SNR.
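The sensitivity quoted here is the standard recall of a beat detector (correctly detected beats over all annotated beats); a minimal illustration with placeholder counts, not the study's numbers:

```python
# Sensitivity (recall) of a beat detector: detected true beats over all
# annotated beats. The counts below are placeholders, not study data.
def sensitivity(true_positives: int, false_negatives: int) -> float:
    return true_positives / (true_positives + false_negatives)

print(f"{sensitivity(985, 15):.1%}")  # 98.5% when 985 of 1000 annotated beats are found
```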

Conclusion

We propose that this method will improve the accuracy of any processing algorithm applied to BCG signals by identifying useful segments where high accuracy can be achieved.

Citations: 0
A Practical Computer Aided Diagnosis System for Breast Ultrasound Classifying Lesions into the ACR BI-RADS Assessment
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-06-01 · DOI: 10.1007/s40846-024-00869-5
Hsin-Ya Su, Chung-Yueh Lien, Pai-Jung Huang, Woei-Chyn Chu

Purpose

In this paper, we propose an open-source deep learning-based computer-aided diagnosis system for breast ultrasound images based on the Breast Imaging Reporting and Data System (BI-RADS).

Methods

Our dataset comprises 8,026 region-of-interest images, preprocessed with ten-fold data augmentation. We compared the classification performance of VGG-16, ResNet-50, and DenseNet-121, as well as two ensemble methods that integrate the single models.
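A hedged sketch of one common way to ensemble such backbones — averaging the softmax outputs of individually trained models (soft voting). The three-class head, 224×224 input size, and untrained weights are assumptions for illustration; in practice each backbone would first be fine-tuned on the ROI dataset.

```python
# Illustrative soft-voting ensemble of three ImageNet backbones for a
# three-class BI-RADS grouping (Category 2 / 3 / 4-5); not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models

def build_backbones(num_classes: int = 3):
    vgg = models.vgg16(weights=None)
    vgg.classifier[6] = nn.Linear(4096, num_classes)
    res = models.resnet50(weights=None)
    res.fc = nn.Linear(res.fc.in_features, num_classes)
    dense = models.densenet121(weights=None)
    dense.classifier = nn.Linear(dense.classifier.in_features, num_classes)
    return [vgg, res, dense]

@torch.no_grad()
def ensemble_predict(nets, x):
    # Average the softmax outputs of the individually trained models.
    probs = torch.stack([torch.softmax(net(x), dim=1) for net in nets]).mean(0)
    return probs.argmax(dim=1)

nets = [n.eval() for n in build_backbones()]
batch = torch.randn(2, 3, 224, 224)   # two RGB ROI crops resized to 224x224
print(ensemble_predict(nets, batch))
```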

Results

The ensemble model achieved the best performance, with 81.8% accuracy. Our results show that the model performs well enough to classify Category 2 and Category 4/5 lesions, and that data augmentation can improve the classification performance for Category 3.

Conclusion

Our main contribution is to classify breast ultrasound lesions into BI-RADS assessment classes, with emphasis on adhering to the BI-RADS medical recommendations: routine follow-up (Category 2), short-term follow-up (Category 3), and biopsy (Category 4/5).

Citations: 0
An Approach to Segment Nuclei and Cytoplasm in Lung Cancer Brightfield Images Using Hybrid Swin-Unet Transformer
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-05-29 · DOI: 10.1007/s40846-024-00873-9
Sreelekshmi Palliyil Sreekumar, Rohini Palanisamy, Ramakrishnan Swaminathan

Purpose

Segmentation of nuclei and cytoplasm in cellular images is essential for estimating the prognosis of lung cancer disease. The detection of these organelles in the unstained brightfield microscopic images is challenging due to poor contrast and lack of separation of structures with irregular morphology. This work aims to carry out semantic segmentation of nuclei and cytoplasm in lung cancer brightfield images using the Swin-Unet Transformer.

Methods

For this study, publicly available brightfield images of lung cancer cells are pre-processed and fed to the Swin-Unet for semantic segmentation. Model-specific hyperparameters are identified after detailed analysis, and the segmentation performance is validated using standard evaluation metrics.

Results

The hyperparameter analysis identifies the optimal settings as focal loss, a learning rate of 0.0001, the Adam optimizer, and a Swin Transformer patch size of 4. With these parameters, the Swin-Unet Transformer accurately segments the nuclei and cytoplasm in the brightfield images, with pixel F1 scores of 90.71% and 79.29%, respectively.
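The loss and optimizer settings quoted above can be reproduced with a standard binary focal-loss formulation; the alpha/gamma values, tensor shapes, and the commented-out `SwinUnet`/`model` placeholders below are assumptions, not the authors' exact configuration.

```python
# A common binary focal-loss formulation (Lin et al., 2017) paired with the
# Adam learning rate reported in the abstract; alpha/gamma are common defaults.
import torch
import torch.nn as nn

class FocalLoss(nn.Module):
    def __init__(self, alpha: float = 0.25, gamma: float = 2.0):
        super().__init__()
        self.alpha, self.gamma = alpha, gamma
        self.bce = nn.BCEWithLogitsLoss(reduction="none")

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        bce = self.bce(logits, targets.float())
        p_t = torch.exp(-bce)                       # probability of the true class
        return (self.alpha * (1.0 - p_t) ** self.gamma * bce).mean()

criterion = FocalLoss()
logits = torch.randn(4, 1, 224, 224)                # per-pixel logits from a segmenter
masks = torch.randint(0, 2, (4, 1, 224, 224))       # binary ground-truth masks
print(criterion(logits, masks).item())
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # settings from the abstract
```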

Conclusion

It is observed that the model can identify nuclei and cytoplasm with varied morphologies. The detection of cytoplasm with weak and subtle edge details indicates the effectiveness of the shifted-window-based self-attention mechanism of Swin-Unet in capturing global, long-range pixel interactions in the brightfield images. Thus, the adopted methodology can be employed for the precise segmentation of nuclei and cytoplasm when assessing the malignancy of lung cancer.

Citations: 0
Biomechanical Finite Element Analysis of Bone Tissues with Different Scales in the Bone Regeneration Area after Scoliosis Surgery
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-05-28 · DOI: 10.1007/s40846-024-00870-y
Xiaozheng Yang, Rongchang Fu, Pengju Li, Kun Wang, Huiran Chen, Fu

Purpose

This paper aims to analyze the influence of mechanical force on bone regeneration from macro and micro perspectives, to investigate the mechanical response of bone tissues at various scales after operation and provide a theoretical basis for further research and clinical practice.

Methods

An effective postoperative lumbar model was constructed, and the bone regeneration area was established at the osteotomy. The area was divided into five stages, from 10 MPa to 100 MPa. Then, the osteon and bone lacuna-osteocyte models were constructed, and their biomechanical characteristics under different working conditions were studied.

Results

From the first stage to the fifth stage, the macroscopic bone tissue with strain above 3000 µε decreased by about 40%, the maximum stress ratio n approximates k (E_O/E_T) for macro- and micro-bone tissues, and the area of osteocytes below 3000 µε increased by about 45%. In the second stage, 41.7% of the bone cells have a strain of 1000–3000 µε, and this percentage increases to 66.7–72.2% after the fourth stage.
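Purely to illustrate the bookkeeping behind these strain-band percentages, the sketch below bins synthetic element strains into the microstrain bands quoted above; the values are random placeholders, not the study's finite element results.

```python
# Toy illustration of binning FE element strains into microstrain bands;
# the strains are synthetic, not study data.
import numpy as np

strains = np.random.default_rng(0).uniform(500, 5000, size=10_000)  # µε, synthetic
bands = {
    "< 1000 µε": (strains < 1000).mean(),
    "1000–3000 µε": ((strains >= 1000) & (strains < 3000)).mean(),
    ">= 3000 µε": (strains >= 3000).mean(),
}
for name, frac in bands.items():
    print(f"{name}: {frac:.1%} of elements")
```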

Conclusion

The macro-meso stress ratio is related to the tissue strength around the osteon. In the first stage, the patient should lie flat and rest, instead of standing upright. At the beginning of the fourth stage, the rate of bone regeneration is much faster than the rate of lesions, making it suitable for upright recovery, and the recovery speed increases.

Citations: 0
Preliminary Results: Comparison of Convolutional Neural Network Architectures as an Auxiliary Clinical Tool Applied to Screening Mammography in Mexican Women
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-05-09 · DOI: 10.1007/s40846-024-00868-6
Samara Acosta-Jiménez, Susana Aideé González-Chávez, Javier Camarillo-Cisneros, César Pacheco-Tena, Mirelle Barcenas-López, Laura Esther González-Lozada, Claudia Hernández-Orozco, Jesús Humberto Burboa-Delgado, Rosa Elena Ochoa-Albíztegui

Purpose

Mammography is the modality of choice for the early detection of breast cancer. Deep learning, specifically using convolutional neural networks (CNNs), has achieved extraordinary results in the classification of diseases, including breast cancer, on imaging. The images used to train a CNN vary based on several factors, such as imaging technique, imaging equipment, and study population; these factors significantly affect the accuracy of CNN models. The aim of this study was to develop a novel CNN for the classification of mammograms as benign or malignant and to compare its utility to that of popular pre-trained CNNs from the literature using transfer learning. All CNNs were trained to detect breast cancer on mammograms from a newly created database of Mexican women (MAMMOMX-PABIOM) and from a public database of UK women (MIAS).

Methods

A database (MAMMOMX-PABIOM) was built comprising 1,070 mammography images of 235 Mexican patients from 4 hospitals in Mexico. The study also used mammographic images from the Mammographic Image Analysis Society (MIAS) public database, which comprises mammography images from the UK National Breast Screening Programme. A novel CNN was developed and trained on different configurations of training data; the accuracy of the models resulting from the novel CNN was compared with that of models resulting from more advanced pre-trained CNNs (DenseNet121, MobileNetV2, ResNet50, VGG16) built using transfer learning.
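A minimal sketch of the transfer-learning setup for one of the pre-trained backbones (MobileNetV2); freezing the feature extractor and the two-class head are assumptions, not the study's exact recipe.

```python
# Hedged sketch: MobileNetV2 transfer learning for benign/malignant mammogram
# classification. Freezing the ImageNet feature extractor is an assumption.
import torch
import torch.nn as nn
from torchvision import models

def mobilenet_transfer(num_classes: int = 2) -> nn.Module:
    net = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():
        p.requires_grad = False                    # keep the ImageNet features fixed
    net.classifier[1] = nn.Linear(net.last_channel, num_classes)
    return net

model = mobilenet_transfer()
print(model(torch.randn(1, 3, 224, 224)).shape)    # torch.Size([1, 2])
```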

Results

Of the models resulting from pre-trained CNNs using transfer learning, the model based on MobileNetV2 and training data from the MAMMOMX-PABIOM database achieved the highest validation accuracy of 70.10%. In comparison, the novel CNN, when trained with the data configuration A6, which comprises data from both the MAMMOMX-PABIOM database and the MIAS database, produced a much higher accuracy of 99.14%.

Conclusion

Although transfer learning is a widely used technique when training data is scarce, the novel CNN produced much higher accuracy values across all configurations of training data than the pre-trained CNNs using transfer learning. In addition, this study addresses a gap: neither a national database of mammograms of Mexican women exists, nor a deep learning tool, focused on this population, for classifying mammograms as benign or malignant.

Citations: 0
Establishment of Three Gene Prognostic Markers in Pancreatic Ductal Adenocarcinoma Using Machine Learning Approach
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-05-09 · DOI: 10.1007/s40846-024-00859-7
Pragya Pragya, Praveen Kumar Govarthan, Malay Nayak, Sudip Mukherjee, Jac Fredo Agastinose Ronickom

Purpose

Pancreatic ductal adenocarcinoma (PDAC) is the most prevalent form of pancreatic cancer, accounting for about 85% of all occurrences. It is highly challenging to treat PDAC because of its extreme aggressiveness and lack of therapeutic options. Identifying new gene markers can help in the design of novel targeted therapeutics.

Methods

In this study, we identified three gene prognostic markers in PDAC using a machine learning approach. Initially, the differentially expressed gene (DEG) profile with accession number GSE183795 was downloaded from the Gene Expression Omnibus database of the National Center for Biotechnology Information (NCBI), which consists of the expression profiles of 244 patients with PDAC (139 pancreatic tumors, 102 adjacent non-tumors, and 3 normal). The expression dataset was then preprocessed using R packages such as GEOquery, Affy, and Limma. Further, DEGs were identified by machine learning algorithms, including random forest (RF) and extreme gradient boosting (XGBoost). Finally, survival analysis was performed on the identified DEGs using GEPIA software (TCGA database).
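As a hedged sketch of the selection idea (synthetic matrix, not the GSE183795 data): train both tree ensembles on the expression matrix, rank genes by feature importance, and keep the intersection of the top-ranked lists.

```python
# Hedged sketch of RF + XGBoost feature-importance intersection on a synthetic
# expression matrix; sample counts mirror the abstract, everything else is made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(244, 500))          # 244 samples x 500 genes (synthetic)
y = rng.integers(0, 2, size=244)         # tumor vs. non-tumor labels (synthetic)
genes = np.array([f"gene_{i}" for i in range(X.shape[1])])

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
xgb = XGBClassifier(n_estimators=300, random_state=0).fit(X, y)

def top_genes(model, k: int = 25) -> set:
    order = np.argsort(model.feature_importances_)[::-1]
    return set(genes[order[:k]])

shared = top_genes(rf) & top_genes(xgb)  # genes ranked highly by both models
print(len(shared), sorted(shared)[:5])
```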

Results

Our results revealed that 6 of the 25 DEGs (ERCC3, ACY3, ATP2A3, MW-TW1879, MW-TW3829, and ZBTB7A) identified by the RF and XGBoost algorithms were common to both, indicating their feature importance. Moreover, three genes, ATP2A3 (p = 0.029), NRL (p = 0.012), and FBXO45 (p = 0.013), were statistically significant in the survival analysis and may be utilized as prognostic marker genes for PDAC.

Conclusion

These findings provide valuable insights into the molecular characteristics of PDAC and can potentially guide future research on cancer theranostics interventions for this devastating disease.

Citations: 0
A Review of Brain Tumor Segmentation Using MRIs from 2019 to 2023 (Statistical Information, Key Achievements, and Limitations)
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-05-04 · DOI: 10.1007/s40846-024-00860-0
Yasaman Zakeri, Babak Karasfi, Afsaneh Jalalian

Purpose

A brain tumor is defined as any group of atypical cells occupying space in the brain. There are more than 120 types. MRI scans are used for brain tumor diagnosis since they are more detailed and three-dimensional. Accurate localization and segmentation of the tumor portion increase patients' survival rates. To this end, we present a systematic review of the latest developments in brain tumor segmentation from MRI.

Methods

To find related articles, we searched keywords such as "brain tumors" and "segmentation by MRI". The searches were performed on Elsevier, Springer, Wiley, and the leading conferences in the field of medical image processing. A total of 79 publications on tumor segmentation from 2019 to 2023 were selected and categorized into four categories: non-artificial-intelligence, machine learning, deep learning, and hybrid deep learning methods.

Results

We reviewed the trending techniques of tumor segmentation and provided a unified and integrated overview of the current state of the art. The article presents the capabilities and shortcomings of each approach and identifies the restrictions on using automated medical image segmentation techniques in clinical practice.

Conclusion

In this study, the advancement of brain tumor segmentation by MRI is discussed, focusing on recent articles. The review identifies the restrictions of the presented techniques, across the four mentioned categories, that prevent them from being used in clinical practice. The literature will guide researchers in becoming familiar with both the leading techniques and the potential problems that need to be addressed.

Citations: 0
The Effect of Coronary Atherosclerosis on Radial Pressure Wave: A Cross-Sectional Observational Clinical Study
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-05-02 · DOI: 10.1007/s40846-024-00867-7
Anooshirvan Mahdavian, Ali Fahim, Reza Arefizadeh, Seyyed Hossein Mousavi

Purpose

This research focuses on developing new ways to monitor coronary artery disease (CAD), the leading type of cardiovascular disease, which requires more straightforward, safer, and more continuous tracking methods along with angiography, the gold standard method. This need arises due to the high risk, cost, and the large number of people living with undiagnosed CAD. The study explores the use of the intrinsic frequency (IF) method, a promising but underutilized technique in the realm of CAD monitoring, to investigate its effectiveness in identifying CAD through the analysis of radial pressure wave patterns.

Method

The radial pressure waves, alongside major CAD risk factors (hypertension, diabetes, hyperlipidemia, smoking, family history, age, and sex), were analyzed in 100 patients undergoing angiography. The IF method was used to evaluate the dynamics of heart and arterial system function, focusing on specific IF indices that reflect the health of the vasculature.

Result

The results, validated through t-tests, reveal notable alterations in specific IF indices among CAD patients: ω2 shows a significant increase, with a mean of 82.5 bpm in CAD versus 41.56 bpm in non-CAD cases. Similarly, Δω displays a significant decrease, with a mean of 15.73 bpm in CAD compared to 49.02 bpm in non-CAD individuals. Conversely, ω1 demonstrates minimal variance between the CAD and non-CAD groups.
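The group comparison reported here is a standard two-sample t-test; a minimal illustration with synthetic values centered on the reported means (not the study data):

```python
# Minimal illustration of the reported comparison: a two-sample t-test on an
# intrinsic-frequency index between CAD and non-CAD groups (synthetic values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
omega2_cad = rng.normal(82.5, 10.0, size=50)       # bpm, synthetic
omega2_non_cad = rng.normal(41.6, 10.0, size=50)   # bpm, synthetic

t, p = stats.ttest_ind(omega2_cad, omega2_non_cad, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3g}")
```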

Conclusion

This study underscores the potential of IF indices, particularly ω2 and Δω, as markers for severe CAD cases and strongly advocates for the integration of continuous monitoring strategies via modern healthcare technology, such as smartwatches, in CAD management.

Graphical Abstract

Citations: 0
Role of Artificial Intelligence in Medical Image Analysis: A Review of Current Trends and Future Directions
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-04-16 · DOI: 10.1007/s40846-024-00863-x
Xin Li, Lei Zhang, Jingsi Yang, Fei Teng

Purpose

This review offers insight into AI’s current and future contributions to medical image analysis. The article highlights the challenges associated with manual image interpretation and introduces AI methodologies, including machine learning and deep learning. It explores AI’s applications in image segmentation, classification, registration, and reconstruction across various modalities like X-ray, computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound.

Background

Medical image analysis is vital in modern healthcare, facilitating disease diagnosis, treatment, and monitoring. Integrating artificial intelligence (AI) techniques, particularly deep learning, has revolutionized this field.

Methods

Recent advancements are discussed, such as generative adversarial networks (GANs), transfer learning, and federated learning. The review assesses the advantages and limitations of AI in medical image analysis, underscoring the importance of interpretability, robustness, and generalizability in clinical practice. Ethical considerations related to data privacy, bias, and regulatory aspects are also examined.

Results

The article concludes by exploring future directions, including personalized medicine, multi-modal fusion, real-time analysis, and seamless integration with electronic health records (EHRs).

Conclusion

This comprehensive review delineates artificial intelligence’s current and prospective role in medical image analysis. With implications for researchers, clinicians, and policymakers, it underscores AI’s transformative potential in enhancing patient care.

Citations: 0
Human vs Machine in Bioengineering Allergology: A Comparative Analysis of Conventional vs Innovative Methods for Quantifying Allergological Skin Prick Tests
IF 2 · Medicine (CAS Tier 4) · Q4 ENGINEERING, BIOMEDICAL · Pub Date: 2024-04-14 · DOI: 10.1007/s40846-024-00856-w
Stefano Palazzo, Nada Chaoul, Marcello Albanesi

Purpose

Immediate hypersensitivity reactions, commonly triggered by allergens, play a crucial role in clinical allergy. The skin prick test (SPT) is the primary diagnostic tool for allergy: an allergen drop is applied to the volar surface of the forearm, a sterile lancet is passed through the drop, and a wheal forms if the patient is sensitized. In allergy practice, wheals are quantified using an arbitrary visual scale or methods such as the Dermographic Pen Method, which involves a dermographic pen and graph paper, or a centimeter ruler. These methodologies are semi-quantitative, time-consuming, and operator-dependent. This study addresses the need for accurate and standardized quantification of SPT responses. To achieve this, we developed a Semi-Automated Method (SAM) for wheal detection.

Methods

A cohort of 26 patients with respiratory allergies underwent SPTs with various allergens. Wheals were quantified using three methods: the Arbitrary Visual Scale Method (AVSM), the Dermographic Pen Measurement Method (DPMM), and the newly developed SAM. SAM used photographic detection and image analysis to calculate the major and minor diameters, mean diameter, wheal surface area, and skin index.
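A hedged sketch of the geometric step described above — extracting diameters and area from a segmented wheal mask. The mask here is a synthetic ellipse and the scikit-image region properties are only one possible implementation, not the authors' pipeline.

```python
# Illustrative computation of wheal geometry from a segmented binary mask;
# the mask is a synthetic ellipse, not a photograph-derived segmentation.
import numpy as np
from skimage import measure

mask = np.zeros((200, 200), dtype=bool)
rr, cc = np.ogrid[:200, :200]
mask[((rr - 100) / 40) ** 2 + ((cc - 100) / 25) ** 2 <= 1] = True  # toy ellipse

props = measure.regionprops(measure.label(mask))[0]
major, minor = props.major_axis_length, props.minor_axis_length
print(f"major={major:.1f}px minor={minor:.1f}px "
      f"mean={0.5 * (major + minor):.1f}px area={props.area}px^2")
```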

Results

Comparative analysis revealed SAM's superior performance in precision and efficiency compared to AVSM and DPMM. Mean surface measurements of histamine-generated wheals using SAM were significantly lower than those obtained with DPMM. Interestingly, SAM consistently demonstrated better performance across all tested allergens.

Conclusion

The introduction of SAM represents a significant advancement in allergy diagnostics. Its semi-automated approach enhances precision and facilitates long-term monitoring of SPT results. Through automation, SAM achieves accuracy in results and ease of use, notably improving allergy diagnostics.

Citations: 0