A Texture Superpixel Approach to Semantic Material Classification for Acoustic Geometry Tagging

M. Colombo, Alan Dolhasz, Carlo Harvey
Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 8, 2021
DOI: 10.1145/3411763.3451657
The current state of audio rendering algorithms allows efficient sound propagation that reflects the realistic acoustic properties of real environments. Among the factors affecting the realism of acoustic simulations is the mapping between an environment's geometry and the acoustic properties of the materials it represents. We present a pipeline that infers material characteristics from their visual representations, providing this mapping automatically. A trained image classifier estimates semantic material information from textured meshes, mapping predicted labels to a database of measured frequency-dependent absorption coefficients. Trained on material image patches generated from superpixels, the classifier performs inference on meshes by decomposing their unwrapped textures; the most frequent label among the predicted texture patches determines the acoustic material assigned to the input mesh. We test the pipeline on a real environment, capturing a conference room and reconstructing its geometry from point-cloud data. We estimate a Room Impulse Response (RIR) of the virtual environment and compare it against a measured counterpart.
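The final tagging step described above — reducing per-superpixel patch predictions to a single material label by majority vote, then looking up that label's measured absorption coefficients — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the material names, band layout, and coefficient values are hypothetical placeholders.

```python
# Sketch of majority-vote material assignment for one mesh.
# All labels and coefficient values below are illustrative assumptions,
# not the paper's measured database.
from collections import Counter

# Hypothetical frequency-dependent absorption coefficients,
# one value per octave band (125 Hz .. 4 kHz).
ABSORPTION_DB = {
    "carpet":  [0.02, 0.06, 0.14, 0.37, 0.60, 0.65],
    "drywall": [0.29, 0.10, 0.05, 0.04, 0.07, 0.09],
    "wood":    [0.15, 0.11, 0.10, 0.07, 0.06, 0.07],
}

def assign_material(patch_labels):
    """Return (label, coefficients) for the most frequent patch prediction.

    `patch_labels` would come from running the image classifier on
    superpixel patches cut from a mesh's unwrapped texture.
    """
    if not patch_labels:
        raise ValueError("mesh produced no texture patches")
    label, _count = Counter(patch_labels).most_common(1)[0]
    return label, ABSORPTION_DB[label]

# Example: five patch predictions for one mesh.
label, coeffs = assign_material(["carpet", "wood", "carpet", "carpet", "drywall"])
print(label)   # carpet
```

In a full pipeline this lookup would feed the chosen coefficients into the acoustic renderer for each tagged mesh.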