Word2Scene: Efficient remote sensing image scene generation with only one word via hybrid intelligence and low-rank representation

Jiaxin Ren, Wanzeng Liu, Jun Chen, Shunxi Yin, Yuan Tao

ISPRS Journal of Photogrammetry and Remote Sensing, Volume 218 (November 2024), Pages 231-257. DOI: 10.1016/j.isprsjprs.2024.11.002
Abstract
To address the challenges facing current remote sensing scene generation methods, such as the difficulty of capturing complex interrelations among geographical features and of integrating implicit expert knowledge into generative models, this paper proposes Word2Scene, an efficient method that uses hybrid intelligence and low-rank representation to generate complex remote sensing scenes from a single word. The approach incorporates geographic expert knowledge to optimize remote sensing scene descriptions, enhancing the accuracy and interpretability of the input descriptions. Combining hybrid intelligence with low-rank representation endows the diffusion model with the ability to understand remote sensing scene concepts and significantly improves its training efficiency. This study also introduces the geographic scene holistic perceptual similarity (GSHPS), a novel evaluation metric that assesses the performance of generative models holistically, from a global perspective. Experimental results demonstrate that our proposed method outperforms existing state-of-the-art models in remote sensing scene generation quality, efficiency, and realism: compared with the original diffusion models, LPIPS (Learned Perceptual Image Patch Similarity, lower is better) decreased by 18.52% (from 0.81 to 0.66) and GSHPS increased by 28.57% (from 0.70 to 0.90), validating the effectiveness and advancement of our method. Moreover, Word2Scene can generate remote sensing scenes not present in the training set, demonstrating strong zero-shot capability. This provides a new perspective and solution for remote sensing image scene generation, with the potential to advance the development of remote sensing, geographic information systems, and related fields. Our code will be released at https://github.com/jaycecd/Word2Scene.
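The abstract does not spell out how the "low-rank representation" works, but the description matches LoRA-style low-rank adaptation of a pretrained text-to-image diffusion model: the backbone weights stay frozen, and small trainable low-rank matrices are injected into its linear (e.g., cross-attention) projections, making new-concept injection far cheaper than full fine-tuning. The sketch below illustrates that general technique only, as an assumption, not Word2Scene's actual implementation; the class name LoRALinear and the hyperparameters r and alpha are illustrative.

```python
# Minimal sketch of LoRA-style low-rank adaptation (an assumption about what
# "low-rank representation" refers to; not the Word2Scene code itself).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update:
    y = x W^T + (alpha / r) * x A^T B^T, with A (r x in) and B (out x r)."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights fixed
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

# Wrapping, e.g., one attention projection of a diffusion UNet trains only
# r * (in_features + out_features) extra parameters for that layer.
layer = LoRALinear(nn.Linear(768, 768), r=4, alpha=8.0)
```

For the reported gains, the percentages are relative changes in the two metrics: (0.81 - 0.66) / 0.81 = 18.52% for LPIPS, where lower is better, and (0.90 - 0.70) / 0.70 = 28.57% for GSHPS, where higher is better.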
Journal Introduction
The ISPRS Journal of Photogrammetry and Remote Sensing (P&RS) is the official journal of the International Society for Photogrammetry and Remote Sensing (ISPRS). It is a platform for scientists and professionals worldwide working in the disciplines that use photogrammetry, remote sensing, spatial information systems, computer vision, and related fields. The journal facilitates communication and the dissemination of advances in these disciplines, while also serving as a comprehensive reference and archive.
P&RS publishes high-quality, peer-reviewed research papers, preferably original work that has not been published before. Papers may address scientific/research, technological-development, or application/practical aspects. The journal also welcomes papers based on presentations at ISPRS meetings, provided they constitute significant contributions to the aforementioned fields.
In particular, P&RS encourages the submission of papers that are of broad scientific interest, showcase innovative applications (especially in emerging fields), have an interdisciplinary focus, discuss topics that have received limited attention in P&RS or related journals, or explore new directions in scientific or professional realms. It is preferred that theoretical papers include practical applications, while papers focusing on systems and applications should include a theoretical background.