Monocular 3D palm posture estimation based on feature-points robust against finger motion
Y. Mizuchi, Y. Hagiwara, Akimasa Suzuki, H. Imamura, Yongwoon Choi
2013 13th International Conference on Control, Automation and Systems (ICCAS 2013), October 2013. DOI: 10.1109/ICCAS.2013.6704065
The usability of wearable augmented reality (AR) systems would improve if users could arbitrarily display virtual information on their palm and simultaneously manipulate it as they would a tablet computer or smartphone. To realize such interaction between users and virtual information, we aim to estimate 3-D palm posture robustly against finger motion. This is based on the assumption that finger motion is estimated separately from palm posture and applied to the manipulation of the displayed information. In addition, the capabilities of power sources, sensors, and processors in wearable computers are very limited. For this reason, by using a monocular camera and estimating palm posture from only a few image feature-points, we achieve an efficient estimation that satisfies the real-time constraints of wearable computers. The accuracy and robustness of our method are demonstrated by qualitative and quantitative comparison with a widely used cardboard marker. Additionally, we confirmed that our method runs on a mobile computer at an average of 12.44 ms per frame.
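The paper itself includes no code, but the core idea the abstract describes, recovering a 3-D pose from a small number of 2-D image feature points seen by a single calibrated camera, is the classic Perspective-n-Point (PnP) problem. The sketch below illustrates that general formulation with OpenCV's solvePnP; it is not the authors' implementation, and the palm model coordinates, detected image points, and camera intrinsics are all placeholder assumptions.

```python
# Minimal PnP sketch (not the authors' method): estimate the 3-D pose of a
# palm-fixed frame from a few 2-D feature points in a monocular image.
# All numeric values are illustrative placeholders.
import numpy as np
import cv2

# Hypothetical 3-D positions of four palm feature points in a palm-fixed
# frame (millimetres). A real system would take these from a palm model.
model_points = np.array([
    [ 0.0,  0.0, 0.0],   # e.g. wrist-side reference point
    [40.0,  0.0, 0.0],   # e.g. opposite palm edge
    [ 0.0, 60.0, 0.0],   # e.g. index-finger base
    [40.0, 60.0, 0.0],   # e.g. little-finger base
], dtype=np.float64)

# Corresponding 2-D detections in the current frame (pixels) - placeholders.
image_points = np.array([
    [320.0, 260.0],
    [380.0, 255.0],
    [325.0, 180.0],
    [385.0, 175.0],
], dtype=np.float64)

# Assumed pinhole intrinsics of the monocular camera; dist assumes an
# already-undistorted image.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Solve for rotation and translation of the palm frame w.r.t. the camera.
ok, rvec, tvec = cv2.solvePnP(model_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix of the palm posture
    print("rotation:\n", R, "\ntranslation (mm):", tvec.ravel())
```

A single least-squares solve over four point correspondences is computationally cheap, which is consistent with the abstract's motivation: using only a few feature points keeps the per-frame cost low enough for the limited processors of wearable computers.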