UnderwaterImage2IR: Underwater impulse response generation via dual-path pre-trained networks and conditional generative adversarial networks
Authors: Yisheng Zhang, Shiguang Liu
Journal: Computer Animation and Virtual Worlds, 35(3), published 2024-05-17
DOI: 10.1002/cav.2243 (https://onlinelibrary.wiley.com/doi/10.1002/cav.2243)
JCR: Q4 (Computer Science, Software Engineering); Impact Factor: 0.9
Citations: 0
Abstract
In acoustic simulation, widely applied and highly effective methods rely on accurately capturing the impulse response (IR) and exploiting its convolution relationship with source signals. This article introduces a novel approach, UnderwaterImage2IR, that generates acoustic IRs from underwater images using dual-path pre-trained networks. The technique aims to achieve cross-modal conversion from underwater visual images to acoustic information with high accuracy at low cost. Our method integrates dual-path pre-trained networks with conditional generative adversarial networks (CGANs) to generate acoustic IRs that match the observed scenes. One branch of the network extracts spatial features from the image, while the other recognizes underwater characteristics. These features are fed into the CGAN, which is trained to generate acoustic IRs corresponding to the observed scenes, thereby achieving high-accuracy acoustic simulation in an efficient manner. Experimental results, compared against the ground truth and evaluated by human experts, demonstrate the significant advantages of our method in generating underwater acoustic IRs, further proving its potential application in underwater acoustic simulation.
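The "convolution relationship" the abstract relies on is the standard one in auralization: the wet (reverberant) signal is the dry source signal convolved with the scene's IR. A minimal sketch in plain Python, with illustrative tap values that are not from the paper:

```python
def convolve(signal, ir):
    """Discrete convolution: each IR tap delays and scales the dry signal,
    and the delayed copies sum to form the reverberant output."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# A unit impulse convolved with any IR reproduces the IR itself.
dry = [1.0, 0.0, 0.0]
ir = [0.8, 0.0, 0.3]  # direct sound plus one delayed echo (illustrative)
wet = convolve(dry, ir)  # → [0.8, 0.0, 0.3, 0.0, 0.0]
```

This is why generating an accurate IR suffices for simulation: once the IR for a scene is known (here, predicted from an underwater image), any source recording can be rendered into that scene by a single convolution.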
Journal Introduction:
With the advent of very powerful PCs and high-end graphics cards, there has been incredible development in Virtual Worlds, real-time computer animation and simulation, and games. At the same time, new and cheaper Virtual Reality devices have appeared, allowing interaction with these real-time Virtual Worlds, and even with the real world through Augmented Reality. Three-dimensional characters, especially Virtual Humans, are now of exceptional quality, which allows their use in the movie industry. But this is only a beginning: with the development of Artificial Intelligence and agent technology, these characters will become increasingly autonomous and even intelligent. They will inhabit Virtual Worlds in a Virtual Life together with animals and plants.