{"title":"利用光的魔力:用于通用全息成像的空间相干指示旋转变压器","authors":"Xin Tong, Renjun Xu, Pengfei Xu, Zishuai Zeng, Shuxi Liu, Daomu Zhao","doi":"10.1117/1.ap.5.6.066003","DOIUrl":null,"url":null,"abstract":"Holographic imaging poses significant challenges when facing real-time disturbances introduced by dynamic environments. The existing deep-learning methods for holographic imaging often depend solely on the specific condition based on the given data distributions, thus hindering their generalization across multiple scenes. One critical problem is how to guarantee the alignment between any given downstream tasks and pretrained models. We analyze the physical mechanism of image degradation caused by turbulence and innovatively propose a swin transformer-based method, termed train-with-coherence-swin (TWC-Swin) transformer, which uses spatial coherence (SC) as an adaptable physical prior information to precisely align image restoration tasks in the arbitrary turbulent scene. The light-processing system (LPR) we designed enables manipulation of SC and simulation of any turbulence. Qualitative and quantitative evaluations demonstrate that the TWC-Swin method presents superiority over traditional convolution frameworks and realizes image restoration under various turbulences, which suggests its robustness, powerful generalization capabilities, and adaptability to unknown environments. Our research reveals the significance of physical prior information in the optical intersection and provides an effective solution for model-to-tasks alignment schemes, which will help to unlock the full potential of deep learning for all-weather optical imaging across terrestrial, marine, and aerial domains.","PeriodicalId":33241,"journal":{"name":"Advanced Photonics","volume":null,"pages":null},"PeriodicalIF":20.6000,"publicationDate":"2023-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Harnessing the magic of light: spatial coherence instructed swin transformer for universal holographic imaging\",\"authors\":\"Xin Tong, Renjun Xu, Pengfei Xu, Zishuai Zeng, Shuxi Liu, Daomu Zhao\",\"doi\":\"10.1117/1.ap.5.6.066003\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Holographic imaging poses significant challenges when facing real-time disturbances introduced by dynamic environments. The existing deep-learning methods for holographic imaging often depend solely on the specific condition based on the given data distributions, thus hindering their generalization across multiple scenes. One critical problem is how to guarantee the alignment between any given downstream tasks and pretrained models. We analyze the physical mechanism of image degradation caused by turbulence and innovatively propose a swin transformer-based method, termed train-with-coherence-swin (TWC-Swin) transformer, which uses spatial coherence (SC) as an adaptable physical prior information to precisely align image restoration tasks in the arbitrary turbulent scene. The light-processing system (LPR) we designed enables manipulation of SC and simulation of any turbulence. Qualitative and quantitative evaluations demonstrate that the TWC-Swin method presents superiority over traditional convolution frameworks and realizes image restoration under various turbulences, which suggests its robustness, powerful generalization capabilities, and adaptability to unknown environments. 
Our research reveals the significance of physical prior information in the optical intersection and provides an effective solution for model-to-tasks alignment schemes, which will help to unlock the full potential of deep learning for all-weather optical imaging across terrestrial, marine, and aerial domains.\",\"PeriodicalId\":33241,\"journal\":{\"name\":\"Advanced Photonics\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":20.6000,\"publicationDate\":\"2023-10-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Advanced Photonics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1117/1.ap.5.6.066003\",\"RegionNum\":1,\"RegionCategory\":\"物理与天体物理\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"OPTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Photonics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/1.ap.5.6.066003","RegionNum":1,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"OPTICS","Score":null,"Total":0}
Harnessing the magic of light: spatial coherence instructed swin transformer for universal holographic imaging
Abstract:
Holographic imaging faces significant challenges from the real-time disturbances introduced by dynamic environments. Existing deep-learning methods for holographic imaging often depend solely on the specific conditions implied by the given data distributions, which hinders their generalization across multiple scenes. One critical problem is how to guarantee alignment between any given downstream task and a pretrained model. We analyze the physical mechanism of image degradation caused by turbulence and propose a Swin-transformer-based method, termed the train-with-coherence-swin (TWC-Swin) transformer, which uses spatial coherence (SC) as an adaptable physical prior to precisely align image restoration tasks in arbitrary turbulent scenes. The light-processing system (LPS) we designed enables manipulation of SC and simulation of arbitrary turbulence. Qualitative and quantitative evaluations demonstrate that TWC-Swin outperforms traditional convolutional frameworks and restores images under various turbulence conditions, indicating its robustness, strong generalization capability, and adaptability to unknown environments. Our research reveals the significance of physical prior information at the intersection of optics and deep learning and provides an effective solution for model-to-task alignment, which will help unlock the full potential of deep learning for all-weather optical imaging across terrestrial, marine, and aerial domains.
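The abstract does not spell out how the spatial-coherence prior enters the network, so the sketch below is only a minimal, hypothetical illustration (in PyTorch) of one way such conditioning could work: a FiLM-style modulation that scales and shifts feature maps according to a scalar SC value, standing in for the paper's actual Swin-transformer-based TWC-Swin pipeline. The class and variable names (CoherenceFiLM, SCConditionedRestorer) are invented for this example and do not come from the paper.

```python
# Minimal sketch (NOT the authors' code): condition a toy image-restoration
# network on a scalar spatial-coherence (SC) value, so one model can adapt
# its behavior to different turbulence/coherence scenes.

import torch
import torch.nn as nn


class CoherenceFiLM(nn.Module):
    """Maps a scalar spatial-coherence value to per-channel scale and shift."""

    def __init__(self, channels: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, 64), nn.GELU(), nn.Linear(64, 2 * channels)
        )

    def forward(self, feat: torch.Tensor, sc: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature maps; sc: (B, 1) SC prior, e.g. in [0, 1]
        gamma, beta = self.mlp(sc).chunk(2, dim=-1)
        return feat * (1 + gamma[..., None, None]) + beta[..., None, None]


class SCConditionedRestorer(nn.Module):
    """Toy restorer: conv encoder -> SC-conditioned modulation -> conv decoder."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(1, channels, 3, padding=1), nn.GELU())
        self.film = CoherenceFiLM(channels)
        self.decode = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, degraded: torch.Tensor, sc: torch.Tensor) -> torch.Tensor:
        feat = self.film(self.encode(degraded), sc)
        return self.decode(feat)


if __name__ == "__main__":
    model = SCConditionedRestorer()
    holograms = torch.randn(4, 1, 64, 64)   # degraded intensity patterns (dummy data)
    coherence = torch.rand(4, 1)            # assumed SC value per sample
    restored = model(holograms, coherence)
    print(restored.shape)                   # torch.Size([4, 1, 64, 64])
```

The FiLM block here only makes the "physical prior as a conditioning signal" idea concrete; in the paper the alignment is realized through training a Swin transformer with SC-labeled data produced by the light-processing system.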
Journal introduction:
Advanced Photonics is a highly selective, open-access, international journal that publishes innovative research in all areas of optics and photonics, including fundamental and applied research. The journal publishes top-quality original papers, letters, and review articles, reflecting significant advances and breakthroughs in theoretical and experimental research and novel applications with considerable potential.
The journal seeks high-quality, high-impact articles across the entire spectrum of optics, photonics, and related fields with specific emphasis on the following acceptance criteria:
- New concepts in fundamental research with great impact and significance.
- State-of-the-art technologies offering novel methods for important applications.
- Reviews of recent major advances and discoveries and state-of-the-art benchmarking.
The journal also publishes news and commentaries highlighting scientific and technological discoveries, breakthroughs, and achievements in optics, photonics, and related fields.