The imminent influx of astronomical data from upcoming ground-based all-sky surveys underscores the need for rapid and efficient deconvolution algorithms to mitigate atmospheric seeing effects. This paper presents three novel models that combine U-Net with an efficient transformer, the Linformer, and its variants. The proposed models are named AstroLinformer (AL), AstroConvLinformer (ACL), and AstroInfoLinformer (AIL). This hybrid approach leverages the complementary strengths of the two architectures: the U-Net's proven excellence at hierarchical feature extraction and spatial reconstruction of images, and the transformer's ability to efficiently model the global context and long-range relationships that a standard U-Net struggles to capture. Our comprehensive analysis, benchmarked against several existing deep learning methods, demonstrates that the proposed models achieve superior performance. Most significantly, they show a marked advantage in recovering key physical parameters of galaxies, exhibiting the lowest RMS errors in the estimation of ellipticity, Sérsic index, half-light radius, and intensity at the half-light radius. Cross-data generalization tests confirm the models' robustness to mismatched PSF and noise conditions, a critical feature for real-world applications. Although all three models performed exceptionally well, the AL model displayed notable robustness under both low and high noise conditions. This work provides a powerful and computationally efficient solution for enhancing the quality of ground-based survey data, directly benefiting high-precision science cases such as weak gravitational lensing and detailed galaxy evolution studies.
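The efficiency claim rests on the Linformer's low-rank attention: keys and values are projected along the sequence axis from length n down to a fixed k, so attention costs O(nk) instead of O(n²). The following is a minimal NumPy sketch of that mechanism only; all weight matrices and dimensions here are illustrative placeholders, not the paper's actual architecture or trained parameters.

```python
import numpy as np

def linformer_attention(x, Wq, Wk, Wv, E, F):
    """Single-head Linformer-style attention sketch.

    Shapes (hypothetical): x is (n, d); Wq, Wk, Wv are (d, d);
    E and F are (k, n) projections that compress the sequence axis,
    reducing the attention map from (n, n) to (n, k).
    """
    Q = x @ Wq                    # queries, (n, d)
    K = E @ (x @ Wk)              # compressed keys, (k, d)
    V = F @ (x @ Wv)              # compressed values, (k, d)
    scores = Q @ K.T / np.sqrt(x.shape[1])          # (n, k), scaled dot product
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)               # row-wise softmax
    return w @ V                  # attended output, (n, d)

rng = np.random.default_rng(0)
n, d, k = 64, 16, 8               # sequence length, model dim, projection dim
x = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
E, F = rng.normal(size=(k, n)), rng.normal(size=(k, n))
out = linformer_attention(x, Wq, Wk, Wv, E, F)
print(out.shape)                  # (64, 16): output keeps the input shape
```

In the hybrid models described above, a block of this kind would sit between U-Net stages so that global context is mixed in at linear rather than quadratic cost in the number of image tokens.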
