Yutaka Kondo, K. Takemura, J. Takamatsu, T. Ogasawara
{"title":"A gesture-centric Android system for multi-party human-robot interaction","authors":"Yutaka Kondo, K. Takemura, J. Takamatsu, T. Ogasawara","doi":"10.5898/JHRI.2.1.Kondo","DOIUrl":null,"url":null,"abstract":"Natural body gesturing and speech dialogue, is crucial for human-robot interaction (HRI) and human-robot symbiosis. Real interaction is not only with one-to-one communication but also among multiple people. We have therefore developed a system that can adjust gestures and facial expressions based on a speaker's location or situation for multi-party communication. By extending our already developed real-time gesture planning method, we propose a gesture adjustment suitable for human demand through motion parameterization and gaze motion planning, which allows communication through eye-to-eye contact. We implemented the proposed motion planning method on an android Actroid-SIT and we proposed to use a Key-Value Store to connect the components of our systems. The Key-Value Store is a high-speed and lightweight dictionary database with parallelism and scalability. We conducted multi-party HRI experiments for 1,662 subjects in total. In our HRI system, over 60% of subjects started speaking to the Actroid, and the residence time of their communication also became longer. 
In addition, we confirmed our system gave humans a more sophisticated impression of the Actroid.","PeriodicalId":92076,"journal":{"name":"Journal of human-robot interaction","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-02-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.5898/JHRI.2.1.Kondo","citationCount":"43","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of human-robot interaction","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5898/JHRI.2.1.Kondo","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 43
Abstract
Natural body gesturing and speech dialogue are crucial for human-robot interaction (HRI) and human-robot symbiosis. Real interaction involves not only one-to-one communication but also communication among multiple people. We have therefore developed a system that adjusts gestures and facial expressions based on a speaker's location or situation for multi-party communication. By extending our previously developed real-time gesture planning method, we propose a gesture adjustment that meets human demands through motion parameterization and gaze motion planning, which allows communication through eye-to-eye contact. We implemented the proposed motion planning method on the android Actroid-SIT and propose using a Key-Value Store to connect the components of our system. The Key-Value Store is a high-speed, lightweight dictionary database with parallelism and scalability. We conducted multi-party HRI experiments with 1,662 subjects in total. In our HRI system, over 60% of the subjects started speaking to the Actroid, and the residence time of their communication also became longer. In addition, we confirmed that our system gave humans a more sophisticated impression of the Actroid.
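The abstract describes connecting system components through a Key-Value Store rather than direct component-to-component links. The sketch below is an illustrative, minimal in-memory version of that pattern, not the authors' implementation: a thread-safe store that lets a hypothetical perception component publish the current speaker's position while the gesture planner reads it asynchronously. The key name `speaker/position` and both components are assumptions for illustration.

```python
import threading

class KeyValueStore:
    """Minimal thread-safe in-memory key-value store.

    Illustrative stand-in for the high-speed dictionary database that
    decouples system components; a production system would use a
    networked store for parallelism and scalability across processes.
    """

    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def put(self, key, value):
        # Writers (e.g., a speaker-localization component) publish state.
        with self._lock:
            self._data[key] = value

    def get(self, key, default=None):
        # Readers (e.g., the gesture planner) poll state independently.
        with self._lock:
            return self._data.get(key, default)

# Hypothetical usage: perception writes, the gesture planner reads.
store = KeyValueStore()
store.put("speaker/position", {"x": 1.2, "y": 0.4})
pos = store.get("speaker/position")
```

Because components share only keys and values, they can be developed and restarted independently; this loose coupling is the main design benefit of a dictionary-database "blackboard" over hard-wired component interfaces.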