Point2Depth: a GAN-based Contrastive Learning Approach for mmWave Point Clouds to Depth Images Transformation

Walter Brescia, Giuseppe Roberto, Vito Andrea Racanelli, S. Mascolo, L. D. Cicco

2023 31st Mediterranean Conference on Control and Automation (MED), June 26, 2023. DOI: 10.1109/MED59994.2023.10185732
The perception of the environment is essential in mobile robotics applications as it enables the proper planning and execution of efficient navigation strategies. Optical sensors offer many advantages, ranging from precision to understandability, but they can be significantly impacted by lighting conditions and the composition of the surroundings. In contrast, millimeter wave (mmWave) radar sensors are not influenced by such adverse conditions and are capable of detecting partially or fully obstructed obstacles, resulting in more informative point clouds. However, such point clouds are often sparse and noisy. This work presents Point2Depth, a cross-modal contrastive learning approach based on Conditional Generative Adversarial Networks (cGANs) to transform sparse point clouds from mmWave sensors into depth images, preserving the distance information while producing a more comprehensible representation. An extensive data collection phase was conducted to create a rich multimodal dataset in which each sample is associated with a timestamp and a pose. The experimental results demonstrate that the approach is able to produce accurate depth images, even in challenging environmental conditions.
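To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the kind of architecture the abstract outlines: a conditional GAN whose generator decodes a point-cloud embedding into a dense depth image, with an InfoNCE-style contrastive term that aligns point-cloud and depth-image embeddings. This is not the authors' implementation; all module names, layer sizes, image resolution, and the loss composition are illustrative assumptions.

```python
# Hypothetical sketch of a cGAN for point-cloud -> depth translation with a
# contrastive alignment term. Shapes and layers are assumptions, not the
# paper's actual networks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP + max-pool -> global feature."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:  # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values             # (B, feat_dim)

class DepthGenerator(nn.Module):
    """Decodes a global point-cloud feature into a dense depth image."""
    def __init__(self, feat_dim: int = 256, img: int = 64):
        super().__init__()
        self.img = img
        self.fc = nn.Linear(feat_dim, 128 * (img // 8) ** 2)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),  # normalized depth
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        x = self.fc(feat).view(-1, 128, self.img // 8, self.img // 8)
        return self.up(x)                                   # (B, 1, img, img)

class DepthEncoder(nn.Module):
    """Embeds depth images into the same space for the contrastive term."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, depth: torch.Tensor) -> torch.Tensor:
        return self.fc(self.conv(depth).flatten(1))

class Discriminator(nn.Module):
    """Conditional discriminator: scores a depth image given the point feature."""
    def __init__(self, feat_dim: int = 256, img: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
        )
        self.out = nn.Linear(64 * (img // 4) ** 2 + feat_dim, 1)

    def forward(self, depth: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        h = self.conv(depth).flatten(1)
        return self.out(torch.cat([h, feat], dim=1))        # raw logit

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE contrastive loss: matching (point, depth) pairs attract, others repel."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / tau                                # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

# Shape check with random data (no real training loop).
enc, gen, denc, disc = PointEncoder(), DepthGenerator(), DepthEncoder(), Discriminator()
pts = torch.randn(4, 128, 3)            # batch of 4 clouds, 128 points each
feat = enc(pts)
fake = gen(feat)                        # (4, 1, 64, 64)
g_loss = (
    F.binary_cross_entropy_with_logits(disc(fake, feat), torch.ones(4, 1))
    + info_nce(feat, denc(fake))        # pull matching cross-modal embeddings together
)
g_loss.backward()
```

The contrastive term is what makes the learning cross-modal: beyond fooling the discriminator, the generator's input embedding and the embedding of its output depth image are pushed to agree for matching pairs and differ across the batch, which is one plausible way to preserve distance information through the translation.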