Enhancing Ultrasound Image Quality Across Disease Domains: Application of Cycle-Consistent Generative Adversarial Network and Perceptual Loss.

Shreeram Athreya, Ashwath Radhachandran, Vedrana Ivezić, Vivek R Sant, Corey W Arnold, William Speier
JMIR Biomedical Engineering, vol. 9, e58911. Published 2024-12-17. DOI: 10.2196/58911. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11688586/pdf/

Abstract

Background: Numerous studies have explored image processing techniques aimed at enhancing ultrasound images to narrow the performance gap between low-quality portable devices and high-end ultrasound equipment. These investigations often use registered image pairs created by modifying the same image through methods like downsampling or adding noise, rather than using separate images from different machines. Additionally, they rely on organ-specific features, limiting the models' generalizability across various imaging conditions and devices. The challenge remains to develop a universal framework capable of improving image quality across different devices and conditions, independent of registration or specific organ characteristics.
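The synthetic-pair strategy described above can be illustrated with a minimal sketch: degrade a high-quality image by block-averaging (downsampling) and adding Gaussian noise, yielding a perfectly registered low-quality counterpart. This is an illustration of the strategy the cited prior studies use, not code from the paper; the function name and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(img, factor=2, noise_sigma=5.0):
    """Create a registered low-quality counterpart of `img`:
    block-average by `factor` (downsampling), then add Gaussian noise.
    Values are kept in the 8-bit intensity range [0, 255]."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    low = img[:h2, :w2].reshape(h2 // factor, factor,
                                w2 // factor, factor).mean(axis=(1, 3))
    noisy = low + rng.normal(0.0, noise_sigma, low.shape)
    return np.clip(noisy, 0.0, 255.0)
```

Because the degraded image is derived from the original, every pixel stays aligned; real low-end and high-end scans of the same anatomy have no such correspondence, which is the gap this paper's nonregistered approach addresses.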

Objective: This study aims to develop a robust framework that enhances the quality of ultrasound images, particularly those captured with compact, portable devices, which often produce lower-quality images due to hardware limitations. The framework is designed to effectively process nonregistered ultrasound image pairs, a common challenge in medical imaging, across various clinical settings and device types. By addressing these challenges, the research seeks to provide a more generalized and adaptable solution that can be widely applied across diverse medical scenarios, improving the accessibility and quality of diagnostic imaging.

Methods: A retrospective analysis was conducted by using a cycle-consistent generative adversarial network (CycleGAN) framework enhanced with perceptual loss to improve the quality of ultrasound images, focusing on nonregistered image pairs from various organ systems. The perceptual loss was integrated to preserve anatomical integrity by comparing deep features extracted from pretrained neural networks. The model's performance was evaluated against corresponding high-resolution images, ensuring that the enhanced outputs closely mimic those from high-end ultrasound devices. The model was trained and validated using a publicly available, diverse dataset to ensure robustness and generalizability across different imaging scenarios.
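The core idea of a perceptual loss, comparing images in the feature space of a network rather than pixel by pixel, can be sketched as below. This is a toy illustration only: the stand-in "feature extractor" is a single hand-written convolution layer, whereas the paper's loss compares deep features from a pretrained network; all names here are hypothetical.

```python
import numpy as np

def toy_features(img, kernels):
    """Stand-in for a pretrained CNN: one conv + ReLU per kernel.
    A real perceptual loss would use deep features from e.g. a
    pretrained VGG network instead."""
    feats = []
    for k in kernels:
        kh, kw = k.shape
        h, w = img.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        feats.append(np.maximum(out, 0.0))  # ReLU
    return feats

def perceptual_loss(img_a, img_b, kernels):
    """Mean squared distance between the two images' feature maps."""
    fa = toy_features(img_a, kernels)
    fb = toy_features(img_b, kernels)
    return float(np.mean([np.mean((a - b) ** 2) for a, b in zip(fa, fb)]))
```

Because the comparison happens on feature maps, the loss tolerates small spatial misalignments while still penalizing changes to edges and textures, which is why it helps preserve anatomical structure between nonregistered pairs.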

Results: The advanced CycleGAN framework, enhanced with perceptual loss, significantly outperformed the previous state-of-the-art, stable CycleGAN, in multiple evaluation metrics. Specifically, our method achieved a structural similarity index of 0.2889 versus 0.2502 (P<.001), a peak signal-to-noise ratio of 15.8935 versus 14.9430 (P<.001), and a learned perceptual image patch similarity score of 0.4490 versus 0.5005 (P<.001). These results demonstrate the model's superior ability to enhance image quality while preserving critical anatomical details, thereby improving diagnostic usefulness.
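Of the three metrics reported above, peak signal-to-noise ratio (PSNR) has the simplest closed form and can be computed directly; the sketch below shows the standard definition (10 log10 of the squared dynamic range over the mean squared error), not the authors' evaluation code.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the test image
    is closer to the reference. `max_val` is the maximum possible
    pixel intensity (255 for 8-bit images)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

SSIM and LPIPS are more involved (local luminance/contrast/structure statistics, and distances in a learned feature space, respectively); in practice they are typically taken from libraries such as scikit-image and the `lpips` package rather than reimplemented.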

Conclusions: This study presents a significant advancement in ultrasound imaging by leveraging a CycleGAN model enhanced with perceptual loss to bridge the quality gap between images from different devices. By processing nonregistered image pairs, the model not only enhances visual quality but also ensures the preservation of essential anatomical structures, crucial for accurate diagnosis. This approach holds the potential to democratize high-quality ultrasound imaging, making it accessible through low-cost portable devices, thereby improving health care outcomes, particularly in resource-limited settings. Future research will focus on further validation and optimization for clinical use.
