CIGGAN: A Ground-Penetrating Radar Image Generation Method Based on Feature Fusion

Authors: Haoxiang Tian; Xu Bai; Xuguang Zhu; Pattathal V. Arun; Jingxuan Mi; Dong Zhao
Journal: IEEE Transactions on Geoscience and Remote Sensing, vol. 63, pp. 1-18 (Q1, Engineering, Electrical & Electronic)
DOI: 10.1109/TGRS.2025.3544392
Published: 2025-02-21
URL: https://ieeexplore.ieee.org/document/10898081/
Citations: 0
Abstract
Monitoring and assessment of critical infrastructure, such as urban roadways, are essential for the overall economy. Roads and bridges are prone to subsurface deformations, leading to significant economic losses and casualties. Ground-penetrating radar (GPR) is widely used for its nondestructive testing capabilities to detect subsurface anomalies. However, acquiring sufficient training data poses a challenge for advancing deep learning applications in subsurface target detection. To address this, a combined image-guided generative adversarial network (CIGGAN) is proposed for generating GPR data with voids by combining various voids and backgrounds. CIGGAN enhances feature diversity by extracting and fusing features from combined images, creating GPR data significantly different from the original. An evaluation criterion is also proposed for assessing the quality of generated images. This study employs two real GPR datasets from a vehicle-mounted GPR system to evaluate the performance of CIGGAN in GPR data generation. Additionally, two state-of-the-art (SOTA) detection models (Faster R-CNN and RetinaNet) are used to test the effectiveness of CIGGAN-generated data for void detection. Results show that CIGGAN has robust generalization capabilities, adapting well to generating GPR data from a small sample size (approximately 100-200 images). Using CIGGAN-generated data in addition to the original dataset improved the F1 scores on the first dataset by 5.82% and 9.22% for the first and second models, respectively. Similarly, the approach improved the F1 score on the second dataset by 3.62% and 2.48% for the first and second models, respectively. Experiments indicate that CIGGAN is a powerful tool for supporting deep learning in the GPR domain.
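The reported gains are in terms of the F1 score, the harmonic mean of precision and recall commonly used to evaluate detectors such as Faster R-CNN and RetinaNet. As a minimal sketch (using hypothetical detection counts, not figures from the paper), the metric can be computed from true-positive, false-positive, and false-negative counts:

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 score: harmonic mean of precision and recall.

    tp: correctly detected voids
    fp: detections with no matching ground-truth void
    fn: ground-truth voids that were missed
    """
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Hypothetical example: 80 correct detections, 10 spurious, 20 missed.
score = f1_score(tp=80, fp=10, fn=20)
print(f"F1 = {score:.4f}")  # harmonic mean of P=0.889 and R=0.800
```

In detection benchmarks, a predicted box typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (often 0.5); the counts above are assumed to come from such a matching step.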
About the journal:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.