Microwave images with high spatiotemporal resolution are essential for observing and predicting tropical cyclones (TCs), including TC positioning, intensity estimation, and concentric-eyewall detection. However, the temporal resolution of tropical cyclone microwave (TCMW) images is limited by constraints on satellite numbers and orbits, posing a challenge for TC disaster forecasting. This study proposes a multi-sensor data fusion approach that uses high-temporal-resolution tropical cyclone infrared (TCIR) images to generate synthetic TCMW images, offering a solution to this data-scarcity problem. Specifically, we introduce a deep learning network based on the Vision Transformer (TCA-ViT) to translate TCIR images into TCMW images. This can be viewed as a form of synthetic data generation that enriches the information available for decision-making. We integrate a phase-based physical guidance mechanism into the training process. Furthermore, we have built a dataset of TC infrared-to-microwave image pairs (TCIR2MW) for training and testing the model. Experimental results demonstrate that the method rapidly and accurately extracts key TC features. Leveraging masking and transfer learning techniques, it compensates for the absence of TCMW images by generating MW images from IR images, thereby supporting downstream tasks such as TC intensity and precipitation forecasting. This study introduces a novel approach to TC image research, with the potential to advance deep learning in this direction and provide vital insights for real-time observation and prediction of global TCs. Our source code and data are publicly available online at https://github.com/kleenY/TCIR2MW.
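The abstract mentions a phase-based physical guidance mechanism in training. The paper does not spell out its form here, but a common way to realize such guidance is a Fourier-phase consistency term, since the spatial structure of an image (e.g., eyewall and rainband geometry) is largely encoded in its FFT phase. The sketch below is an illustrative assumption, not the authors' exact loss; the function name and formulation are hypothetical.

```python
import numpy as np

def phase_consistency_loss(generated, target):
    """Mean absolute difference between the Fourier phases of two images.

    Hypothetical sketch of a phase-based guidance term: it penalizes
    structural (spatial-layout) mismatch between a generated MW image
    and the target, independent of the Fourier amplitudes.
    """
    ph_gen = np.angle(np.fft.fft2(generated))
    ph_tgt = np.angle(np.fft.fft2(target))
    # Wrap each phase difference into [-pi, pi] before averaging,
    # so a difference of 2*pi counts as zero, not as a large error.
    diff = np.angle(np.exp(1j * (ph_gen - ph_tgt)))
    return float(np.mean(np.abs(diff)))

# Identical images incur zero loss; a spatially shifted copy does not,
# because shifting an image adds a linear ramp to its FFT phase.
img = np.random.default_rng(0).random((64, 64))
shifted = np.roll(img, 5, axis=0)
print(phase_consistency_loss(img, img))      # 0 (up to float noise)
print(phase_consistency_loss(img, shifted))  # clearly positive
```

In a training loop, such a term would typically be added to a pixel-wise reconstruction loss with a small weighting coefficient.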
