{"title":"Relevant Perception Modalities for Flexible Human-Robot Teams","authors":"Nico Höllerich, D. Henrich","doi":"10.1109/RO-MAN47096.2020.9223593","DOIUrl":null,"url":null,"abstract":"Robust and reliable perception plays an important role when humans engage into cooperation with robots in industrial or household settings. Various explicit and implicit communication modalities and perception methods can be used to recognize expressed intentions. Depending on the modality, different sensors, areas of observation, and perception methods need to be utilized. More modalities increase the complexity and costs of the setup. We consider the scenario of a cooperative task in a potentially noisy environment, where verbal communication is hardly feasible. Our goal is to investigate the importance of different, non-verbal communication modalities for intention recognition. To this end, we build upon an established benchmark study for human cooperation and investigate which input modalities contribute most towards recognizing the expressed intention. To measure the detection rate, we conducted a second study. Participants had to predict actions based on a stream of symbolic input data. Findings confirm the existence of a common gesture dictionary and the importance of hand tracking for action prediction when the number of feasible actions increases. The contribution of this work is a usage ranking of gestures and a comparison of input modalities to improve prediction capabilities in human-robot cooperation.","PeriodicalId":383722,"journal":{"name":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/RO-MAN47096.2020.9223593","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2
Abstract
Robust and reliable perception plays an important role when humans engage in cooperation with robots in industrial or household settings. Various explicit and implicit communication modalities and perception methods can be used to recognize expressed intentions. Depending on the modality, different sensors, areas of observation, and perception methods need to be utilized, and each additional modality increases the complexity and cost of the setup. We consider the scenario of a cooperative task in a potentially noisy environment, where verbal communication is hardly feasible. Our goal is to investigate the importance of different non-verbal communication modalities for intention recognition. To this end, we build upon an established benchmark study for human cooperation and investigate which input modalities contribute most towards recognizing the expressed intention. To measure the detection rate, we conducted a second study in which participants had to predict actions based on a stream of symbolic input data. The findings confirm the existence of a common gesture dictionary and the importance of hand tracking for action prediction when the number of feasible actions increases. The contribution of this work is a usage ranking of gestures and a comparison of input modalities to improve prediction capabilities in human-robot cooperation.
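The core question of the second study, which subsets of symbolic input modalities best predict the next action, can be illustrated with a toy sketch. The following Python snippet is not the authors' method: the modality names (gesture, gaze, hand), the symbol values, the toy event stream, and the majority-vote predictor are all illustrative assumptions. It ranks every non-empty subset of modalities by leave-one-out prediction accuracy, mirroring the paper's comparison of how much each modality contributes.

```python
from collections import Counter
from itertools import combinations

# Toy event stream: each observation holds symbolic readings for three
# hypothetical modalities plus the action the participant performed next.
# All names and values are illustrative, not taken from the paper.
STREAM = [
    {"gesture": "point",     "gaze": "part_A", "hand": "near_A", "action": "hand_over_A"},
    {"gesture": "open_palm", "gaze": "part_B", "hand": "near_B", "action": "hand_over_B"},
    {"gesture": "point",     "gaze": "part_A", "hand": "near_B", "action": "hand_over_A"},
    {"gesture": "open_palm", "gaze": "part_A", "hand": "near_A", "action": "hand_over_A"},
    {"gesture": "point",     "gaze": "part_B", "hand": "near_B", "action": "hand_over_B"},
    {"gesture": "open_palm", "gaze": "part_B", "hand": "near_A", "action": "hand_over_B"},
]

def accuracy(modalities):
    """Leave-one-out accuracy of a majority-vote predictor that only
    sees the given subset of modalities."""
    hits = 0
    for i, obs in enumerate(STREAM):
        # Collect the actions of all other observations whose visible
        # modality readings match the held-out observation exactly.
        votes = Counter(
            other["action"]
            for j, other in enumerate(STREAM)
            if j != i and all(other[m] == obs[m] for m in modalities)
        )
        if votes and votes.most_common(1)[0][0] == obs["action"]:
            hits += 1
    return hits / len(STREAM)

# Rank every non-empty modality subset by predictive accuracy.
names = ["gesture", "gaze", "hand"]
for r in range(1, len(names) + 1):
    for subset in combinations(names, r):
        print(f"{'+'.join(subset):20s} accuracy = {accuracy(subset):.2f}")
```

On a real recording, the gap between a subset's accuracy and the full set's accuracy indicates how much the omitted modalities (and their sensors) contribute, which is the kind of evidence a modality ranking like the paper's is built on.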