
Latest publications from International Journal of Imaging Systems and Technology

A Federated Learning Framework for Brain Tumor Segmentation Without Sharing Patient Data
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) | Pub Date: 2024-07-29 | DOI: 10.1002/ima.23147
Wei Zhang, Wei Jin, Seungmin Rho, Feng Jiang, Chi-fu Yang

Brain tumors pose a significant threat to human health, necessitating early detection and accurate diagnosis to enhance treatment outcomes. However, centralized data collection and processing encounter challenges related to privacy breaches and data integration due to the sensitivity and diversity of brain tumor patient data. In response, this paper proposes an innovative federated learning-based approach for brain tumor detection, facilitating multicenter data sharing while safeguarding individual data privacy. Our proposed federated learning architecture features each medical center as a participant, with each retaining local data and engaging in secure communication with a central server. Within this federated migration learning framework, each medical center independently trains a base model on its local data and transmits a fraction of the model's parameters to the central server. The central server leverages these parameters for model aggregation and knowledge sharing, facilitating the exchange and migration of models among participating medical centers. This collaborative approach empowers individual medical centers to share knowledge and experiences, thereby enhancing the performance and accuracy of the brain tumor detection model. To validate our federated learning model, we conduct comprehensive evaluations using an independent test dataset, comparing its performance with traditional centralized learning approaches. The experimental results underscore the superiority of the federated learning-based brain tumor detection approach, achieving heightened detection performance compared with traditional methods while meticulously preserving data privacy. In conclusion, our study presents an innovative solution for effective data collaboration and privacy protection in the realm of brain tumor detection, holding promising clinical applications. 
The federated learning approach not only advances detection accuracy but also establishes a secure and privacy-preserving foundation for collaborative research in medical imaging.
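The abstract does not specify the aggregation rule the central server uses; as a hedged illustration, a minimal NumPy sketch of the standard data-size-weighted (FedAvg-style) parameter averaging such a server might perform is shown below. The function name `fedavg` and the toy medical centers are illustrative, not the authors' implementation.

```python
import numpy as np

def fedavg(center_params, center_sizes):
    """Aggregate per-center model parameters by data-size-weighted averaging.

    center_params: list of dicts mapping layer name -> np.ndarray
    center_sizes:  number of local training samples at each center
    """
    total = sum(center_sizes)
    agg = {}
    for name in center_params[0]:
        # Each center contributes proportionally to its local data volume
        agg[name] = sum(
            (n / total) * p[name] for p, n in zip(center_params, center_sizes)
        )
    return agg

# Three hypothetical medical centers, each holding a single 2x2 weight layer
params = [{"w": np.full((2, 2), v)} for v in (1.0, 2.0, 3.0)]
sizes = [100, 100, 200]
global_w = fedavg(params, sizes)["w"]  # (1*100 + 2*100 + 3*200)/400 = 2.25
```

In a full system, each center would transmit only a fraction of its parameters per round, as the abstract describes; the averaging step itself is unchanged.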

Citations: 0
HCN: Hybrid Capsule Network for Fetal Plane Classification in Ultrasound Images
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) | Pub Date: 2024-07-23 | DOI: 10.1002/ima.23149
Sourav Kumar Tanwar, Prakash Choudhary,  Priyanka, Tarun Agrawal

Classifying fetal ultrasound images into different anatomical categories, such as the abdomen, brain, femur, and thorax, can contribute to the early identification of potential anomalies or dangers during prenatal care; overlooking major abnormalities might lead to fetal death or permanent disability. This article proposes a novel hybrid capsule network architecture-based method for identifying fetal ultrasound images. The proposed architecture increases the precision of fetal image categorization by combining the benefits of a capsule network with a convolutional neural network. The proposed hybrid model surpasses conventional convolutional network-based techniques with an overall accuracy of 0.989 when tested on a publicly accessible dataset of prenatal ultrasound images. The results indicate that the proposed hybrid architecture is a promising approach for precisely and consistently classifying fetal ultrasound images, with potential uses in clinical settings.
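The abstract does not detail the capsule layers; as background, the characteristic capsule nonlinearity from the original capsule-network literature (the "squash" function, which scales a capsule vector's length into [0, 1) while preserving its direction) can be sketched in NumPy. This is a generic illustration, not the authors' exact implementation.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule 'squash' nonlinearity (Sabour et al., 2017):
    v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||).
    Long vectors map near length 1, short vectors near 0."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

caps = np.array([[3.0, 4.0]])   # a single capsule of length 5
out = squash(caps)              # length becomes 25/26 ~= 0.9615, direction kept
```

The squashed length acts as the probability that the entity a capsule represents (here, a fetal plane) is present.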

Citations: 0
Retinal Blood Vessels Segmentation With Improved SE-UNet Model
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) | Pub Date: 2024-07-19 | DOI: 10.1002/ima.23145
Yibo Wan, Gaofeng Wei, Renxing Li, Yifan Xiang, Dechao Yin, Minglei Yang, Deren Gong, Jiangang Chen

Accurate segmentation of retinal vessels is crucial for the early diagnosis and treatment of eye diseases, for example, diabetic retinopathy, glaucoma, and macular degeneration. Due to the intricate structure of retinal vessels, it is essential to extract their features with precision for the semantic segmentation of medical images. In this study, an improved deep learning neural network was developed with a focus on feature extraction based on the U-Net structure. The enhanced U-Net combines the architecture of convolutional neural networks (CNNs) with SE blocks (squeeze-and-excitation blocks) to adaptively extract image features after each U-Net encoder's convolution. This approach aids in suppressing nonvascular regions and highlighting features for specific segmentation tasks. The proposed method was trained and tested on the DRIVE, CHASE_DB1, and STARE datasets. On these three datasets respectively, the proposed model achieved accuracy, sensitivity, specificity, Dice coefficient (Dc), and Matthews correlation coefficient (MCC) of 95.62/0.9853/0.9652, 0.7751/0.7976/0.7773, 0.9832/0.8567/0.9865, 82.53/87.23/83.42, and 0.7823/0.7987/0.8345, outperforming previous methods, including UNet++, attention U-Net, and ResUNet. The experimental results demonstrated that the proposed method improved the retinal vessel segmentation performance.
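As context for the SE blocks mentioned above, a minimal NumPy sketch of squeeze-and-excitation channel recalibration is shown below. The weight matrices `w1`/`w2` stand in for the learned bottleneck MLP; this is a generic illustration of the SE mechanism, not the authors' exact layer.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    """Squeeze-and-excitation recalibration for a (C, H, W) feature map.

    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights,
    where r is the bottleneck reduction ratio.
    """
    # Squeeze: global average pooling over spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid gate in (0, 1)
    h = np.maximum(w1 @ z, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))
    # Scale: reweight each channel by its learned importance
    return feature_map * gate[:, None, None]

# With zero weights the gate is sigmoid(0) = 0.5 for every channel
fm = np.ones((4, 3, 3))
gated = se_block(fm, np.zeros((2, 4)), np.zeros((4, 2)))
```

In the enhanced U-Net described above, such a block would follow each encoder convolution, letting the network suppress nonvascular channels.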

Citations: 0
Multiscale Feature Fusion Method for Liver Cirrhosis Classification
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) | Pub Date: 2024-07-17 | DOI: 10.1002/ima.23143
Shanshan Wang, Ling Jian, Kaiyan Li, Pingping Zhou, Liang Zeng

Liver cirrhosis is one of the most common liver diseases in the world, posing a threat to people's daily lives. In advanced stages, cirrhosis can lead to severe symptoms and complications, making early detection and treatment crucial. This study aims to address this critical healthcare challenge by improving the accuracy of liver cirrhosis classification using ultrasound imaging, thereby assisting medical professionals in early diagnosis and intervention. This article proposes a new multiscale feature fusion network model (MSFNet), which uses a feature extraction module to capture multiscale features from ultrasound images, enabling the neural network to utilize richer information to accurately classify the stage of cirrhosis. In addition, a new loss function is proposed to solve the class imbalance problem in medical datasets; it makes the model pay more attention to samples that are difficult to classify and improves the model's performance. The effectiveness of the proposed MSFNet was evaluated using ultrasound images from 61 subjects. Experimental results demonstrate that our method achieves high classification accuracy: 98.08% on convex array datasets and 97.60% on linear array datasets. The proposed method can classify early, middle, and late cirrhosis very accurately, provides valuable insights for the clinical treatment of liver cirrhosis, and may be helpful for the rehabilitation of patients.
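The abstract does not state the form of the new loss function; one common way to make a model "pay more attention" to hard, under-represented classes is class-weighted cross-entropy, sketched below as an illustrative stand-in (the function name and weighting scheme are assumptions, not the authors' loss).

```python
import numpy as np

def weighted_ce(probs, labels, class_weights):
    """Class-weighted cross-entropy: rare/hard classes get larger weights,
    so their misclassification contributes more to the loss.

    probs: (N, K) predicted class probabilities; labels: (N,) int labels;
    class_weights: (K,) per-class weights.
    """
    eps = 1e-12  # guard against log(0)
    w = class_weights[labels]
    picked = probs[np.arange(len(labels)), labels]
    return float(np.mean(-w * np.log(picked + eps)))

probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
labels = np.array([0, 1])
loss_equal = weighted_ce(probs, labels, np.array([1.0, 1.0]))
loss_upweighted = weighted_ce(probs, labels, np.array([1.0, 2.0]))
```

Upweighting the minority class raises its contribution: `loss_upweighted` exceeds `loss_equal` for the same predictions.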

Citations: 0
Enhancing Skin Disease Diagnosis Through Deep Learning: A Comprehensive Study on Dermoscopic Image Preprocessing and Classification
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) | Pub Date: 2024-07-17 | DOI: 10.1002/ima.23148
Elif Nur Haner Kırğıl, Çağatay Berke Erdaş

Skin cancer occurs when abnormal cells in the top layer of the skin, known as the epidermis, undergo uncontrolled growth due to unrepaired DNA damage, leading to the development of mutations. These mutations lead to rapid cell growth and the development of cancerous tumors, whose type depends on the cells of origin. Overexposure to ultraviolet rays from the sun, tanning beds, or sunlamps is a primary factor in the occurrence of skin cancer. Since skin cancer is one of the most common types of cancer and has a high mortality, early diagnosis is extremely important. The dermatology literature has many studies of computer-aided diagnosis for early and highly accurate skin cancer detection. In this study, skin cancer classification was performed with the Regnet x006, EfficientNetv2 B0, and InceptionResnetv2 deep learning methods. To increase classification performance, hairs and the black corner pixels characteristic of dermoscopic images, both of which could introduce noise for deep learning, were eliminated in a preprocessing step consisting of hair removal, cropping, segmentation, and median filtering. To measure the performance of the proposed preprocessing technique, results were obtained with both raw images and preprocessed images. The model developed to solve the classification problem is based on deep learning architectures. In the four experiments carried out within the scope of the study, classification was performed for the eight classes in the dataset, for squamous cell carcinoma versus basal cell carcinoma, for benign keratosis versus actinic keratosis, and finally for benign versus malignant disease. The best accuracy values of the four experiments were 0.858, 0.929, 0.917, and 0.906, respectively.
The study underscores the significance of early and accurate diagnosis in addressing skin cancer, a prevalent and potentially fatal condition. The primary aim of the preprocessing procedures was to attain enhanced performance results by concentrating solely on the area spanning the lesion instead of analyzing the complete image. Combining the suggested preprocessing strategy with deep learning techniques shows potential for enhancing skin cancer diagnosis, particularly in terms of sensitivity and specificity.
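Of the preprocessing steps listed above (hair removal, cropping, segmentation, median filtering), the median filter is the easiest to make concrete; below is a plain-NumPy sketch for a grayscale image. It is an illustration of the filtering step only, not the authors' pipeline, and a real pipeline would use a vectorized library routine.

```python
import numpy as np

def median_filter(img, k=3):
    """k x k median filter; edges handled by replicate ('edge') padding.
    Isolated bright specks (e.g., residual hair pixels) are suppressed
    because the median of a mostly-uniform neighborhood ignores outliers."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single bright outlier pixel is removed by the 3x3 median
img = np.zeros((5, 5))
img[2, 2] = 255.0
clean = median_filter(img)
```

Hair removal proper is usually a separate morphological step (e.g., black-hat filtering plus inpainting); the median filter then smooths the remaining speckle.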

Citations: 0
Convolutional Neural Network-Based CT Image Segmentation of Kidney Tumours
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (ENGINEERING, ELECTRICAL & ELECTRONIC) | Pub Date: 2024-07-17 | DOI: 10.1002/ima.23142
Cong Hu, Wenwen Jiang, Tian Zhou, Chunting Wan, Aijun Zhu

Kidney tumours are among the most common tumours in humans, and the main current treatment is surgical removal. CT images are usually segmented manually by a specialist for pre-operative planning, but this is influenced by the surgeon's experience and skill and can be time-consuming. Because kidney tumours have complex lesions and varied morphologies that make segmentation difficult, this article proposes a convolutional neural network-based automatic segmentation method for CT images of kidney tumours, addressing the two most common problems in tumour segmentation images: boundary blurring and false positives. The method is highly accurate and reliable, and can assist doctors in surgical planning as well as diagnosis and treatment, relieving medical pressure to a certain extent. The EfficientNetV2-UNet segmentation model proposed in this article includes three main parts: a feature extractor, a reconstruction network and a Bayesian decision algorithm. Firstly, to counter tumour false positives, the EfficientNetV2 feature extractor, which offers high training accuracy and efficiency, is selected as the backbone network; it extracts shallow features such as tumour location, morphology and texture from the CT image by downsampling. Secondly, on the basis of the backbone network, a reconstruction network is designed, consisting mainly of a conversion block, deconvolution blocks, convolution blocks and an output block. Then, the up-sampling architecture is constructed to gradually recover the spatial resolution of the feature map, fully capture contextual information and form a complete encoding–decoding structure. Multi-scale feature fusion is achieved by superimposing feature-map channels from all levels on the left and right sides of the network, preventing the loss of details and enabling accurate tumour segmentation.
Finally, a Bayesian decision algorithm is designed for the edge-blurring phenomenon of segmented tumours and is cascaded on the output of the reconstruction network, combining edge features of the original CT image and the segmented image for probability estimation; this improves the accuracy of the model's edge segmentation. Medical images in NIfTI (.nii) format were converted to NumPy arrays using Python, more than 2000 CT images containing only kidney tumours were then selected from the KiTS19 dataset as the dataset for the model, and their dimensions were standardised to 128 × 128. The experimental results show that the model outperforms many other advanced models, with good segmentation performance.
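The dataset standardisation described above (CT slices resized to 128 × 128) can be sketched as follows. The nearest-neighbour resize and the function name are illustrative assumptions; reading the `.nii` file itself would typically use a library such as nibabel (`nib.load(path).get_fdata()`), which is omitted here to keep the sketch self-contained.

```python
import numpy as np

def standardise_slice(slice_2d, size=128):
    """Nearest-neighbour resize of one 2D CT slice to size x size.
    Each output pixel samples the source pixel at the proportional index."""
    h, w = slice_2d.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    return slice_2d[np.ix_(rows, cols)]

# A typical 512 x 512 CT slice reduced to the model's 128 x 128 input
ct = np.arange(512 * 512, dtype=np.float32).reshape(512, 512)
std = standardise_slice(ct)
```

Nearest-neighbour sampling is shown for brevity; for intensity images, bilinear interpolation is the more common choice in practice.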

Citations: 0
Infusing Weighted Average Ensemble Diversity for Advanced Breast Cancer Detection
IF 3.0 | CAS Quartile 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-16 | DOI: 10.1002/ima.23146
Barsha Abhisheka, Saroj Kumar Biswas, Biswajit Purkayastha

Breast cancer is a widespread health threat for women globally, often difficult to detect early due to its asymptomatic nature. As the disease advances, treatment becomes intricate and costly, ultimately resulting in elevated fatality rates. Currently, despite the widespread use of advanced machine learning (ML) and deep learning (DL) techniques, a comprehensive diagnosis of breast cancer remains elusive. Most of the existing methods primarily utilize either attention-based deep models or models based on handcrafted features to capture and gather local details. However, both of these approaches lack the capability to offer essential local information for precise tumor detection. Additionally, the available breast cancer datasets suffer from a class imbalance issue. Hence, this paper presents a novel weighted average ensemble network (WA-ENet) designed for early-stage breast cancer detection that leverages the advantage of ensemble techniques over single-classifier models for more robust and accurate prediction. The proposed model employs a weighted average-based ensemble technique, combining predictions from three diverse classifiers. The optimal combination of weights is determined using the hill climbing (HC) algorithm. Moreover, the proposed model enhances overall system performance by integrating deep features and handcrafted features obtained with the histogram of oriented gradients (HOG), thereby providing precise local information. Additionally, the proposed work addresses class imbalance by incorporating the borderline synthetic minority over-sampling technique (BSMOTE). It achieves 99.65% accuracy on the BUSI dataset and 97.48% on the UDIAT dataset.
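The weight-selection step can be illustrated with a tiny hill-climbing search: random perturbations of the three ensemble weights are accepted whenever validation accuracy does not drop. The per-sample probabilities below are made-up stand-ins for the three classifiers, not the paper's actual models.

```python
import random

def ensemble_acc(weights, probs, labels):
    # Accuracy of the weighted-average ensemble at threshold 0.5.
    total = sum(weights)
    correct = 0
    for i, y in enumerate(labels):
        p = sum(w * probs[k][i] for k, w in enumerate(weights)) / total
        correct += int((p >= 0.5) == bool(y))
    return correct / len(labels)

def hill_climb(probs, labels, steps=200, seed=0):
    rng = random.Random(seed)
    best_w = [1.0, 1.0, 1.0]                       # start from uniform weights
    best = ensemble_acc(best_w, probs, labels)
    for _ in range(steps):
        cand = [max(1e-6, w + rng.uniform(-0.2, 0.2)) for w in best_w]
        acc = ensemble_acc(cand, probs, labels)
        if acc >= best:                            # accept non-worsening moves
            best_w, best = cand, acc
    return best_w, best

# Classifier 0 is reliable; classifiers 1 and 2 are noisy (synthetic values).
probs = [[0.9, 0.9, 0.1, 0.1], [0.2, 0.8, 0.8, 0.2], [0.4, 0.6, 0.6, 0.4]]
labels = [1, 1, 0, 0]
weights, acc = hill_climb(probs, labels)
```

Because the search starts from uniform weights and only accepts non-worsening moves, the final accuracy can never fall below that of a plain (unweighted) average.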

{"title":"Infusing Weighted Average Ensemble Diversity for Advanced Breast Cancer Detection","authors":"Barsha Abhisheka,&nbsp;Saroj Kumar Biswas,&nbsp;Biswajit Purkayastha","doi":"10.1002/ima.23146","DOIUrl":"https://doi.org/10.1002/ima.23146","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast cancer is a widespread health threat for women globally, often difficult to detect early due to its asymptomatic nature. As the disease advances, treatment becomes intricate and costly, ultimately resulting in elevated fatality rates. Currently, despite the widespread use of advanced machine learning (ML) and deep learning (DL) techniques, a comprehensive diagnosis of breast cancer remains elusive. Most of the existing methods primarily utilize either attention-based deep models or models based on handcrafted features to capture and gather local details. However, both of these approaches lack the capability to offer essential local information for precise tumor detection. Additionally, the available breast cancer datasets suffer from class imbalance issue. Hence, this paper presents a novel weighted average ensemble network (WA-ENet) designed for early-stage breast cancer detection that leverages the ability of ensemble technique over single classifier-based models for more robust and accurate prediction. The proposed model employs a weighted average-based ensemble technique, combining predictions from three diverse classifiers. The optimal combination of weights is determined using the hill climbing (HC) algorithm. Moreover, the proposed model enhances overall system performance by integrating deep features and handcrafted features through the use of HOG, thereby providing precise local information. Additionally, the proposed work addresses class imbalance by incorporating borderline synthetic minority over-sampling technique (BSMOTE). 
It achieves 99.65% accuracy on BUSI and 97.48% on UDIAT datasets.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141631188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A New Herbal Source of Synthesizing Contrast Agents for Magnetic Resonance Imaging
IF 3.0 | CAS Quartile 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-10 | DOI: 10.1002/ima.23136
Ali Yazdani, Ahmadreza Okhovat, Raheleh Doosti, Hamid Soltanian-Zadeh

This study explores the potential of halophytes, plants adapted to saline environments, as a novel source for developing herbal MRI contrast agents. Halophytes naturally accumulate various metals within their tissues. These metal ions, potentially complexed with organic molecules, are released into aqueous solutions prepared from the plants. We investigated the ability of these compounds to generate contrast enhancement in MRI using a sequential approach. First, aqueous extracts were prepared from seven selected halophytes, and their capacity to induce contrast in MR images was evaluated. Based on these initial findings, representative halophytes were chosen for further investigation. Second, chemical analysis revealed aluminum as the primary potent metal that enhances the contrast. Third, the halophyte extract was fractionated based on polarity, and the most polar fraction exhibited the strongest contrast-generating effect. Finally, the relaxivity of this fraction, a key parameter for MRI contrast agents, was measured. We propose that aluminum, likely complexed with a polar molecule within the plant extract, is responsible for the observed contrast enhancement in MRI.
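Relaxivity, the key parameter mentioned above, is conventionally obtained as the slope of a linear fit of the relaxation rate 1/T1 against agent concentration, i.e. 1/T1(c) = 1/T1(0) + r1·c. A minimal least-squares sketch with synthetic numbers (not the study's measurements):

```python
def relaxivity(concs, t1_times):
    # Fit relaxation rate R1 = 1/T1 (s^-1) linearly against concentration (mM).
    rates = [1.0 / t for t in t1_times]
    n = len(concs)
    mc, mr = sum(concs) / n, sum(rates) / n
    num = sum((c - mc) * (r - mr) for c, r in zip(concs, rates))
    den = sum((c - mc) ** 2 for c in concs)
    r1 = num / den               # relaxivity in s^-1 mM^-1 (slope)
    r0 = mr - r1 * mc            # baseline rate 1/T1(0) (intercept)
    return r1, r0

# Synthetic T1 values generated from r1 = 4.0 s^-1 mM^-1 and 1/T1(0) = 0.5 s^-1
concs = [0.0, 0.5, 1.0, 2.0]
t1_times = [1.0 / (0.5 + 4.0 * c) for c in concs]
r1, r0 = relaxivity(concs, t1_times)
```

On real data the T1 values would come from an inversion-recovery or saturation-recovery fit at each concentration; the linear regression step itself is unchanged.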

{"title":"A New Herbal Source of Synthesizing Contrast Agents for Magnetic Resonance Imaging","authors":"Ali Yazdani,&nbsp;Ahmadreza Okhovat,&nbsp;Raheleh Doosti,&nbsp;Hamid Soltanian-Zadeh","doi":"10.1002/ima.23136","DOIUrl":"https://doi.org/10.1002/ima.23136","url":null,"abstract":"<div>\u0000 \u0000 <p>This study explores the potential of halophytes, plants adapted to saline environments, as a novel source for developing herbal MRI contrast agents. Halophytes naturally accumulate various metals within their tissues. These metal ions, potentially complexed with organic molecules, are released into aqueous solutions prepared from the plants. We investigated the ability of these compounds to generate contrast enhancement in MRI using a sequential approach. First, aqueous extracts were prepared from seven selected halophytes, and their capacity to induce contrast in MR images was evaluated. Based on these initial findings, sample halophytes were chosen for further investigations. Second, chemical analysis revealed aluminum as the primary potent metal which enhances the contrast. Third, the halophyte extract was fractionated based on polarity, and the most polar fraction exhibited the strongest contrast-generating effect. Finally, the relaxivity of this fraction, a key parameter for MRI contrast agents, was measured. 
We propose that aluminum, likely complexed with a polar molecule within the plant extract, is responsible for the observed contrast enhancement in MRI.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141596974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Pythagorean Fuzzy Set for Enhancement of Low Contrast Mammogram Images
IF 3.0 | CAS Quartile 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-10 | DOI: 10.1002/ima.23137
Tamalika Chaira, Arun Sarkar

Breast masses are often one of the primary signs of breast cancer, and precise segmentation of these masses is essential for accurate diagnosis and treatment planning. Diagnosis may be complicated by the size and visibility of the mass. When the mass is not clearly visible, precise segmentation becomes very difficult, and in that case enhancement is essential. Inadequate compression, patient movement, or paddle/breast movement during the exposure process can cause hazy mammogram images. Without enhancement, accurate segmentation and detection cannot be done. Because uncertainty is present in different regions of the image, reducing it remains a central problem, and fuzzy methods can handle such uncertainty more effectively. Among the many fuzzy and advanced fuzzy methods, we consider the Pythagorean fuzzy set to be particularly powerful for dealing with uncertainty. This research proposes a new Pythagorean fuzzy methodology for mammography image enhancement. The image is first transformed into a fuzzy image, and the nonmembership function is then calculated using a newly created Pythagorean fuzzy generator. The membership function of the Pythagorean fuzzy image is computed from the nonmembership function. The plot of membership value against hesitation degree is used to calculate a constant term in the membership function. Next, an enhanced image is obtained by applying a fuzzy intensification operator to the Pythagorean fuzzy image. The proposed method is compared qualitatively and quantitatively with non-fuzzy, intuitionistic fuzzy, type-2 fuzzy, and Pythagorean fuzzy methods, and it is found to outperform them. To show the usefulness of the proposed enhancement method, segmentation is carried out on the enhanced images.
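A minimal sketch of the fuzzification-intensification pipeline: each gray level is mapped to a membership degree μ, a non-membership ν is chosen so the Pythagorean condition μ² + ν² ≤ 1 holds, and a contrast-intensification operator stretches memberships away from 0.5. The generator used here (ν = √(1 − μ²)) and Zadeh's INT operator are illustrative stand-ins, not the paper's newly created generator or constant-term construction.

```python
def intensify(mu):
    # Zadeh's contrast-intensification (INT) operator.
    return 2 * mu * mu if mu <= 0.5 else 1 - 2 * (1 - mu) ** 2

def enhance(gray_levels, l_max=255):
    out = []
    for g in gray_levels:
        mu = g / l_max                          # fuzzification: membership degree
        nu = (1 - mu ** 2) ** 0.5               # illustrative Pythagorean non-membership
        assert mu * mu + nu * nu <= 1 + 1e-12   # Pythagorean condition holds
        out.append(round(intensify(mu) * l_max))  # intensify, then defuzzify
    return out

enhanced = enhance([0, 64, 128, 192, 255])  # dark pixels darker, bright pixels brighter
```

Values below mid-gray are pushed down and values above it are pushed up, which is the contrast-stretching behaviour the enhancement stage relies on.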

{"title":"Pythagorean Fuzzy Set for Enhancement of Low Contrast Mammogram Images","authors":"Tamalika Chaira,&nbsp;Arun Sarkar","doi":"10.1002/ima.23137","DOIUrl":"https://doi.org/10.1002/ima.23137","url":null,"abstract":"<div>\u0000 \u0000 <p>Breast masses are often one of the primary signs of breast cancer, and precise segmentation of these masses is essential for accurate diagnosis and treatment planning. Diagnosis may be complex depending on the size and visibility of the mass. When the mass is not visible clearly, precise segmentation becomes very difficult and in that case enhancement is essential. Inadequate compression, patient movement, or paddle/breast movement during the exposure process might cause hazy mammogram images. Without enhancement, accurate segmentation and detection cannot be done. As there exists uncertainties in different regions, reducing uncertainty is still a main problem and so fuzzy methods may deal these uncertainties in a better way. Though there are many fuzzy and advanced fuzzy methods, we consider Pythagorean fuzzy set as one of the fuzzy sets that may be powerful to deal with uncertainty. This research proposes a new Pythagorean fuzzy methodology for mammography image enhancement. The image is first transformed into a fuzzy image, and the nonmembership function is then calculated using a newly created Pythagorean fuzzy generator. Membership function of Pythagorean fuzzy image is computed from nonmembership function. The plot between the membership value and the hesitation degree is used to calculate a constant term in the membership function. Next, an enhanced image is obtained by applying fuzzy intensification operator to the Pythagorean fuzzy image. The proposed method is compared qualitatively and quantitatively with those of non-fuzzy, intuitionistic fuzzy, Type 2 fuzzy, and Pythagorean fuzzy methods, it is found that the suggested method outperforms the other methods. 
To show the usefulness of the proposed enhanced method, segmentation is carried out on the enhanced images.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141583834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Cross-Species Segmentation of Animal Prostate Using a Human Prostate Dataset and Limited Preoperative Animal Images: A Sampled Experiment on Dog Prostate Tissue
IF 3.0 | CAS Quartile 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-07-10 | DOI: 10.1002/ima.23138
Yang Yang, Seong Young Ko

In the development of medical devices and surgical robot systems, animal models are often used for evaluation, necessitating accurate organ segmentation. Deep learning-based image segmentation provides a solution for automatic and precise organ segmentation. However, a significant challenge in this approach arises from the limited availability of training data for animal models. In contrast, human medical image datasets are readily available. To address this imbalance, this study proposes a fine-tuning approach that combines a limited set of animal model images with a comprehensive human image dataset. Various postprocessing algorithms were applied to ensure that the segmentation results met the positioning requirements for the evaluation of a medical robot under development. As one of the target applications, magnetic resonance images were used to determine the position of the dog's prostate, which was then used to determine the target location of the robot under development. The MSD TASK5 dataset was used as the human dataset for pretraining, which involved a modified U-Net network. Ninety-nine pretrained backbone networks were tested as encoders for U-Net. The cross-training validation was performed using the selected network backbone. The highest accuracy, with an IoU score of 0.949, was achieved using the independent validation set from the MSD TASK5 human dataset. Subsequently, fine-tuning was performed using a small set of dog prostate images, resulting in the highest accuracy of an IoU score of 0.961 across different cross-validation groups. The processed results demonstrate the feasibility of the proposed approach for accurate prostate segmentation.
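The IoU scores quoted above measure overlap between the predicted and ground-truth masks: the size of their intersection divided by the size of their union. A minimal version for binary masks given as flat 0/1 sequences:

```python
def iou(pred, target):
    # Intersection-over-Union for binary masks given as flat 0/1 sequences.
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    # Convention: two empty masks overlap perfectly.
    return inter / union if union else 1.0

score = iou([1, 1, 0, 0], [1, 0, 1, 0])  # intersection = 1, union = 3
```

An IoU of 0.949 or 0.961, as reported for the human validation set and the fine-tuned dog-prostate model, means the predicted mask covers almost exactly the same pixels as the expert annotation.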

{"title":"Cross-Species Segmentation of Animal Prostate Using a Human Prostate Dataset and Limited Preoperative Animal Images: A Sampled Experiment on Dog Prostate Tissue","authors":"Yang Yang,&nbsp;Seong Young Ko","doi":"10.1002/ima.23138","DOIUrl":"https://doi.org/10.1002/ima.23138","url":null,"abstract":"<div>\u0000 \u0000 <p>In the development of medical devices and surgical robot systems, animal models are often used for evaluation, necessitating accurate organ segmentation. Deep learning-based image segmentation provides a solution for automatic and precise organ segmentation. However, a significant challenge in this approach arises from the limited availability of training data for animal models. In contrast, human medical image datasets are readily available. To address this imbalance, this study proposes a fine-tuning approach that combines a limited set of animal model images with a comprehensive human image dataset. Various postprocessing algorithms were applied to ensure that the segmentation results met the positioning requirements for the evaluation of a medical robot under development. As one of the target applications, magnetic resonance images were used to determine the position of the dog's prostate, which was then used to determine the target location of the robot under development. The MSD TASK5 dataset was used as the human dataset for pretraining, which involved a modified U-Net network. Ninety-nine pretrained backbone networks were tested as encoders for U-Net. The cross-training validation was performed using the selected network backbone. The highest accuracy, with an IoU score of 0.949, was achieved using the independent validation set from the MSD TASK5 human dataset. Subsequently, fine-tuning was performed using a small set of dog prostate images, resulting in the highest accuracy of an IoU score of 0.961 across different cross-validation groups. 
The processed results demonstrate the feasibility of the proposed approach for accurate prostate segmentation.</p>\u0000 </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"34 4","pages":""},"PeriodicalIF":3.0,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141583835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal: International Journal of Imaging Systems and Technology