{"title":"Assistive robotic arm autonomously bringing a cup to the mouth by face recognition","authors":"Hideyuki Tanaka, Y. Sumi, Y. Matsumoto","doi":"10.1109/ARSO.2010.5679633","DOIUrl":null,"url":null,"abstract":"We developed an assistive-robotic-arm system which autonomously grasps a cup and brings it to the user's mouth. It was developed as a prototype of meal-assistance robot. We utilized two heterogeneous eye-in-hand cameras. One is the front-camera capturing objects, and the other is the side-camera capturing the user's face. The latter keeps an occlusion-free view even during the object bringing. We implemented a face recognition function which robustly identifies the user's face while predicting the face position. The arm is controlled by visual servoing technique. We verified the basic performance of the system through preliminary tests. The arm was able to execute the task, controlling the arm according to the position of the object and the user's face. We demonstrated the basic possibility for the meal-assistance robot.","PeriodicalId":164753,"journal":{"name":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 IEEE Workshop on Advanced Robotics and its Social Impacts","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARSO.2010.5679633","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15
Abstract
We developed an assistive robotic-arm system that autonomously grasps a cup and brings it to the user's mouth, as a prototype of a meal-assistance robot. The system uses two heterogeneous eye-in-hand cameras: a front camera that captures objects and a side camera that captures the user's face. The side camera keeps an occlusion-free view of the face even while the object is being brought to the user. We implemented a face recognition function that robustly identifies the user's face while predicting the face position, and the arm is controlled by a visual servoing technique. We verified the basic performance of the system through preliminary tests: the arm executed the task, moving according to the positions of the object and the user's face. These results demonstrate the basic feasibility of the meal-assistance robot.
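The abstract does not specify the servoing law or the face-position predictor, so the following is only an illustrative sketch: a textbook image-based visual servoing (IBVS) step paired with a constant-velocity extrapolation of the face position. All function names, gains, coordinates, and the assumption that the cup feature is driven toward the predicted mouth position in a single camera view are hypothetical, not taken from the paper.

```python
import numpy as np

def predict_face_position(prev_pos, curr_pos, dt=1.0):
    """Constant-velocity extrapolation of the face position (assumed model;
    the paper predicts the face position but does not give the predictor)."""
    velocity = (curr_pos - prev_pos) / dt
    return curr_pos + velocity * dt

def interaction_matrix(x, y, Z):
    """Textbook 2x6 interaction matrix of a point feature at normalized
    image coordinates (x, y) and depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(feature, target, Z, gain=0.5):
    """Classical IBVS law v = -gain * pinv(L) @ (s - s*),
    returning a 6-DOF camera velocity twist."""
    L = interaction_matrix(feature[0], feature[1], Z)
    error = feature - target
    return -gain * np.linalg.pinv(L) @ error

if __name__ == "__main__":
    # Two consecutive face detections (normalized image coordinates, assumed values).
    prev_face = np.array([0.05, 0.02])
    curr_face = np.array([0.06, 0.02])
    mouth_target = predict_face_position(prev_face, curr_face)

    # Current image position of the grasped cup and an assumed feature depth of 0.5 m.
    cup_feature = np.array([0.00, 0.10])
    v_cam = ibvs_velocity(cup_feature, mouth_target, Z=0.5)
    print("camera velocity twist [vx, vy, vz, wx, wy, wz]:", v_cam)
```

In this sketch the predicted mouth position serves as the servoing target, which mirrors the paper's idea of keeping the face tracked (via the side camera) while the arm moves; the actual controller and camera-to-arm mapping in the paper may differ.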