Active Vision for Extraction of Physically Plausible Support Relations
Markus Grotz, D. Sippel, T. Asfour
2019 IEEE-RAS 19th International Conference on Humanoid Robots (Humanoids), October 2019
DOI: 10.1109/Humanoids43949.2019.9035018
Robots manipulating objects in cluttered scenes require a semantic scene understanding that describes objects and their relations. Knowledge about physically plausible support relations among objects in such scenes is key for action execution. Due to occlusions, however, support relations often cannot be reliably inferred from a single view alone. In this work, we present an active vision system that mitigates occlusions and explores the scene for object support relations. We extend our previous work, in which physically plausible support relations are extracted based on geometric primitives. The active vision system generates view candidates based on existing support relations among the objects and selects the next best view. We evaluate our approach in simulation as well as on the humanoid robot ARMAR-6, and show that the active vision system improves the semantic scene model by extracting physically plausible support relations from multiple views.
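The abstract describes a next-best-view loop: candidate views are generated from the current support relations, and the view expected to resolve the most uncertainty is selected. A minimal sketch of such a selection step is shown below; the data structure, the scoring heuristic, and all names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical next-best-view selection, as suggested by the abstract:
# score each candidate view by how many uncertain support relations it
# could disambiguate, discounted by how occluded the targets are from it.
from dataclasses import dataclass

@dataclass
class ViewCandidate:
    position: tuple           # assumed camera position (x, y, z)
    uncertain_relations: int  # support relations this view could disambiguate
    occlusion: float          # fraction of target objects occluded (0..1)

def view_score(v: ViewCandidate) -> float:
    # Favor views that resolve many uncertain relations with little occlusion.
    return v.uncertain_relations * (1.0 - v.occlusion)

def next_best_view(candidates: list) -> ViewCandidate:
    return max(candidates, key=view_score)

candidates = [
    ViewCandidate((1.0, 0.0, 1.5), uncertain_relations=3, occlusion=0.6),
    ViewCandidate((0.0, 1.0, 1.5), uncertain_relations=2, occlusion=0.1),
    ViewCandidate((-1.0, 0.0, 1.5), uncertain_relations=4, occlusion=0.9),
]
best = next_best_view(candidates)
```

Here the second candidate wins: it resolves fewer relations than the third, but its near-unoccluded line of sight gives it the highest expected payoff, which is the trade-off an occlusion-aware view planner must make.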