{"title":"事件引导图像超分辨率的分离式跨模态融合","authors":"Minjie Liu;Hongjian Wang;Kuk-Jin Yoon;Lin Wang","doi":"10.1109/TAI.2024.3418376","DOIUrl":null,"url":null,"abstract":"Event cameras detect the intensity changes and produce asynchronous events with high dynamic range and no motion blur. Recently, several attempts have been made to superresolve the intensity images guided by events. However, these methods directly fuse the event and image features without distinguishing the modality difference and achieve image superresolution (SR) in multiple steps, leading to error-prone image SR results. Also, they lack quantitative evaluation of real-world data. In this article, we present an \n<italic>end-to-end</i>\n framework, called \n<italic>event-guided image (EGI)-SR</i>\n to narrow the modality gap and subtly integrate the event and RGB modality features for effective image SR. Specifically, EGI-SR employs three crossmodality encoders (CME) to learn modality-specific and modality-shared features from the stacked events and the intensity image, respectively. As such, EGI-SR can better mitigate the negative impact of modality varieties and reduce the difference in the feature space between the events and the intensity image. Subsequently, a transformer-based decoder is deployed to reconstruct the SR image. Moreover, we collect a real-world dataset, with temporally and spatially aligned events and color image pairs. We conduct extensive experiments on the synthetic and real-world datasets, showing EGI-SR favorably surpassing the existing methods by a large margin.","PeriodicalId":73305,"journal":{"name":"IEEE transactions on artificial intelligence","volume":"5 10","pages":"5314-5324"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Disentangled Cross-modal Fusion for Event-Guided Image Super-resolution\",\"authors\":\"Minjie Liu;Hongjian Wang;Kuk-Jin Yoon;Lin Wang\",\"doi\":\"10.1109/TAI.2024.3418376\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Event cameras detect the intensity changes and produce asynchronous events with high dynamic range and no motion blur. Recently, several attempts have been made to superresolve the intensity images guided by events. However, these methods directly fuse the event and image features without distinguishing the modality difference and achieve image superresolution (SR) in multiple steps, leading to error-prone image SR results. Also, they lack quantitative evaluation of real-world data. In this article, we present an \\n<italic>end-to-end</i>\\n framework, called \\n<italic>event-guided image (EGI)-SR</i>\\n to narrow the modality gap and subtly integrate the event and RGB modality features for effective image SR. Specifically, EGI-SR employs three crossmodality encoders (CME) to learn modality-specific and modality-shared features from the stacked events and the intensity image, respectively. As such, EGI-SR can better mitigate the negative impact of modality varieties and reduce the difference in the feature space between the events and the intensity image. Subsequently, a transformer-based decoder is deployed to reconstruct the SR image. Moreover, we collect a real-world dataset, with temporally and spatially aligned events and color image pairs. 
We conduct extensive experiments on the synthetic and real-world datasets, showing EGI-SR favorably surpassing the existing methods by a large margin.\",\"PeriodicalId\":73305,\"journal\":{\"name\":\"IEEE transactions on artificial intelligence\",\"volume\":\"5 10\",\"pages\":\"5314-5324\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-06-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on artificial intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10576683/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on artificial intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10576683/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Disentangled Cross-modal Fusion for Event-Guided Image Super-resolution
Event cameras detect intensity changes and produce asynchronous events with high dynamic range and no motion blur. Recently, several attempts have been made to super-resolve intensity images guided by events. However, these methods directly fuse the event and image features without distinguishing the modality difference, and they achieve image super-resolution (SR) in multiple steps, leading to error-prone SR results. They also lack quantitative evaluation on real-world data. In this article, we present an end-to-end framework, called event-guided image super-resolution (EGI-SR), that narrows the modality gap and subtly integrates the event and RGB modality features for effective image SR. Specifically, EGI-SR employs three cross-modality encoders (CMEs) to learn modality-specific and modality-shared features from the stacked events and the intensity image, respectively. As such, EGI-SR better mitigates the negative impact of modality differences and reduces the gap in feature space between the events and the intensity image. Subsequently, a transformer-based decoder reconstructs the SR image. Moreover, we collect a real-world dataset with temporally and spatially aligned event and color image pairs. Extensive experiments on synthetic and real-world datasets show that EGI-SR surpasses existing methods by a large margin.
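The abstract refers to "stacked events" as the network's event-side input. The paper's exact stacking scheme is not given here, but a minimal sketch of one common way to turn an asynchronous event stream into a fixed-size tensor is shown below; the `stack_events` helper, the bin count, and the polarity-accumulation scheme are illustrative assumptions, not the authors' method.

```python
# Sketch (assumption): accumulate event polarities into a few temporal bins,
# producing a (num_bins, H, W) tensor an encoder can consume.
import numpy as np

def stack_events(xs, ys, ts, ps, height, width, num_bins=5):
    """Stack events (x, y int arrays; ts float array; ps polarity in {-1, +1})
    into a (num_bins, height, width) tensor by per-bin polarity accumulation."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    t0, t1 = ts.min(), ts.max()
    # Map each timestamp to a temporal bin index in [0, num_bins - 1].
    bins = ((ts - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)).astype(np.int64)
    # Scatter-add polarities at their (bin, y, x) locations.
    np.add.at(voxel, (bins, ys, xs), ps.astype(np.float32))
    return voxel
```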
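The three-encoder disentanglement and transformer decoder described above can likewise be sketched. The following PyTorch skeleton is a hedged illustration of the idea only: modality-specific encoders for events and image, a weight-shared encoder applied to both inputs, concatenation fusion, a transformer over spatial tokens, and a pixel-shuffle SR head. All layer sizes and the fusion/upsampling choices are assumptions; this is not the authors' EGI-SR implementation.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class DisentangledSR(nn.Module):
    """Illustrative three-encoder fusion + transformer decoder (not EGI-SR)."""
    def __init__(self, event_bins=5, feat=64, scale=4, num_layers=4):
        super().__init__()
        # Modality-specific encoders for stacked events and the RGB image.
        self.event_enc = conv_block(event_bins, feat)
        self.image_enc = conv_block(3, feat)
        # Modality-shared encoder: the same weights see both modalities,
        # encouraging features that live in a common space.
        self.shared_in_event = nn.Conv2d(event_bins, feat, 3, padding=1)
        self.shared_in_image = nn.Conv2d(3, feat, 3, padding=1)
        self.shared_enc = conv_block(feat, feat)
        self.fuse = nn.Conv2d(4 * feat, feat, 1)
        # Transformer-based decoder over fused spatial tokens.
        layer = nn.TransformerEncoderLayer(d_model=feat, nhead=4,
                                           batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Pixel-shuffle upsampling head to the SR resolution.
        self.head = nn.Sequential(
            nn.Conv2d(feat, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, events, image):
        f_e = self.event_enc(events)                          # event-specific
        f_i = self.image_enc(image)                           # image-specific
        f_se = self.shared_enc(self.shared_in_event(events))  # shared (events)
        f_si = self.shared_enc(self.shared_in_image(image))   # shared (image)
        f = self.fuse(torch.cat([f_e, f_i, f_se, f_si], dim=1))
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)                 # (B, H*W, C)
        tokens = self.decoder(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(f)
```

Under these assumptions, for 4x SR of a 64x64 input, events of shape (B, 5, 64, 64) and an image of shape (B, 3, 64, 64) yield a (B, 3, 256, 256) output.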