Sim2Real Transfer for Audio-Visual Navigation with Frequency-Adaptive Acoustic Field Prediction
Changan Chen, Jordi Ramos, Anshul Tomar, Kristen Grauman
arXiv:2405.02821 (arXiv - CS - Sound), published 2024-05-05
Abstract
Sim2real transfer has received increasing attention lately due to the success of learning robotic tasks end-to-end in simulation. While there has been substantial progress in transferring vision-based navigation policies, the existing sim2real strategy for audio-visual navigation performs data augmentation empirically without measuring the acoustic gap. Sound differs from light in that it spans a much wider range of frequencies and thus requires a different solution for sim2real. We propose the first treatment of sim2real for audio-visual navigation by disentangling it into acoustic field prediction (AFP) and waypoint navigation. We first validate this design choice in the SoundSpaces simulator and show improvement on the Continuous AudioGoal navigation benchmark. We then collect real-world data to measure the spectral difference between simulation and the real world by training AFP models that take only a specific frequency subband as input. We further propose a frequency-adaptive strategy that intelligently selects the best frequency band for prediction based on both the measured spectral difference and the energy distribution of the received audio, which improves performance on the real data. Lastly, we build a real robot platform and show that the transferred policy can successfully navigate to sounding objects. This work demonstrates the potential of building intelligent agents that learn to see, hear, and act entirely in simulation, and then transferring them to the real world.
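
To make the frequency-adaptive idea concrete, below is a minimal sketch (not the authors' released code) of how a band could be selected at inference time: each candidate subband is scored by combining the energy the received audio carries in that band with a pre-measured sim-to-real spectral gap for that band, and the AFP model trained on the highest-scoring band would then be used. The band edges, gap values, and scoring rule here are illustrative assumptions.

```python
import numpy as np

# Hypothetical subbands (Hz) and per-band sim-to-real gap scores measured
# offline (lower = smaller gap between simulated and real spectra).
# The values below are placeholders, not measurements from the paper.
SUBBANDS = [(0, 1000), (1000, 4000), (4000, 8000), (8000, 16000)]
SIM2REAL_GAP = np.array([0.20, 0.10, 0.35, 0.60])


def band_energy(audio: np.ndarray, sr: int, band: tuple) -> float:
    """Fraction of the signal's spectral energy that falls inside `band`."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(spectrum[mask].sum() / (spectrum.sum() + 1e-12))


def select_band(audio: np.ndarray, sr: int, gap_weight: float = 1.0) -> int:
    """Pick the subband that balances high received energy against a small
    measured sim-to-real gap (one plausible way to combine the two signals)."""
    energies = np.array([band_energy(audio, sr, b) for b in SUBBANDS])
    scores = energies - gap_weight * SIM2REAL_GAP
    return int(np.argmax(scores))


if __name__ == "__main__":
    # Example: a 1-second recording at 16 kHz (random noise as a stand-in).
    sr = 16000
    audio = np.random.randn(sr)
    idx = select_band(audio, sr)
    print(f"Selected subband: {SUBBANDS[idx]} Hz")
```

The trade-off encoded in `gap_weight` reflects the abstract's claim that both signals matter: a band with lots of received energy is uninformative if its simulated acoustics diverge strongly from the real world, and vice versa.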