Range image analysis for controlling an adaptive 3D camera
P. Einramhof, Robert Schwarz, M. Vincze
2011 8th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI)
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145869
Human vision is the reference when designing perception systems for cognitive service robots, especially its ability to quickly identify task-relevant regions in a scene and to foveate on those regions. An adaptive 3D camera currently under development aims to mimic these properties, endowing service robots with a higher level of perception and interaction capability with respect to everyday objects and environments. A scene is first coarsely scanned and analyzed. Based on the analysis result and the task, relevant regions within the scene are identified, and data acquisition is concentrated on details of interest, allowing higher-resolution 3D sampling of those details. To set the stage, we first briefly describe the sensor hardware and then focus on the analysis of range images captured by it. Two approaches, one based on saliency maps and the other on range image segmentation, are presented along with preliminary results.
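The coarse-scan-then-foveate loop described in this abstract could be prototyped roughly as follows. This is a hypothetical sketch, not the paper's method: the depth-gradient saliency measure, the `range_saliency` name and the threshold value are all assumptions.

```python
import numpy as np

def range_saliency(depth, threshold=0.8):
    """Toy saliency over a range image: treat large depth discontinuities
    as salient and return a bounding box for high-resolution re-sampling.
    (Hypothetical stand-in for the paper's saliency-map approach.)"""
    gy, gx = np.gradient(depth.astype(float))   # depth gradients along rows/cols
    saliency = np.hypot(gx, gy)                 # gradient magnitude
    if saliency.max() > 0:
        saliency = saliency / saliency.max()    # normalize to [0, 1]
    ys, xs = np.nonzero(saliency >= threshold)  # salient pixel coordinates
    if xs.size == 0:
        return saliency, None                   # nothing salient: rescan coarsely
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # (x0, y0, x1, y1)
    return saliency, bbox
```

A controller would then command the adaptive camera to sample only inside `bbox` at higher resolution.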
Task allocation strategy of heterogeneous multi-robot for indoor surveillance
Seohyun Jeon, Minsu Jang, Seunghwan Park, Daeha Lee, Young-Jo Cho, Jaehong Kim
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145954
Multi-robot cooperation promises increased performance and fault tolerance in large-scale environments. To achieve these goals, a dynamic task allocation algorithm is required that adapts to a changing environment. Whereas previous research has mostly simplified the mission and emphasized the task allocation algorithm itself, this paper addresses the need to analyze the mission more concretely and introduces a two-part strategy for the task allocation problem: off-line and on-line analysis. The off-line analysis takes the robot-working scenario into account statically, before the robots are deployed. The on-line analysis covers the real-time task allocation algorithm while the robots are running. By combining these two analyses, efficient task allocation is achieved at minimum cost.
Real-time human body motion estimation based on multi-layer laser scans
Wei Wang, D. Brscic, Zhiwei He, S. Hirche, K. Kühnlenz
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145980
Real-time human body motion estimation plays an important role in robot perception, especially in applications of human-robot interaction and service robotics. In this paper, we propose a method for real-time 3D human body motion estimation based on 3-layer laser scans. All useful scanned points, representing the human body contour, are extracted by subtracting the learned background of the environment. For human contour feature extraction, and to avoid segmentation failures, we propose a novel iterative template matching algorithm for clustering, in which the torso and hip sections are modeled as templates with different radii. Robust, distinct human motion features are extracted using maximum likelihood estimation and nearest-neighbor clustering. Subsequently, the positions of the human joints in 3D space are recovered by associating the extracted features with a predefined articulated model of the human body. Finally, we demonstrate the proposed methods through experiments, which show accurate human body motion tracking in real time.
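Modeling torso and hip sections as circular templates with different radii amounts to fitting circles to 2D slices of the laser scan. As an illustrative sketch only (the algebraic Kasa fit and the template radii below are assumptions, not the authors' iterative matching algorithm):

```python
import numpy as np

def kasa_circle_fit(pts):
    """Algebraic (Kasa) circle fit to 2D points; returns (cx, cy, r).
    From x^2 + y^2 = a*x + b*y + c, the center is (a/2, b/2)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    cx, cy = a / 2.0, b / 2.0
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def match_template(pts, templates=None):
    """Label a scan cluster with the body-section template whose radius is
    closest to the fitted one (the radii, in metres, are illustrative)."""
    if templates is None:
        templates = {"torso": 0.15, "hip": 0.18}
    _, _, r = kasa_circle_fit(pts)
    return min(templates, key=lambda name: abs(templates[name] - r))
```

A cluster of points sampled from roughly half a body cross-section would then be labeled by the nearest-radius template.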
Design and development of Nancy, a social robot
S. Ge, J. Cabibihan, Zhengchen Zhang, Yanan Li, Cai Meng, Hongsheng He, M. Safizadeh, Y. Li, J. Yang
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145884
In this paper, we present the design of a social robot, Nancy, developed as a platform for engaging social interaction. Targeting a social, safe, interactive and user-friendly robot companion, we present Nancy's design philosophy together with its mechanical, electrical, artificial-skin and software specifications. In particular, Nancy has 32 degrees of freedom (DOFs) throughout its whole body, and its social intelligence is implemented on top of vision, audio and control subsystems.
Analysis and design of LVDT
D. Yun, S. Ham, Jung-Ho Park, S. Yun
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6146035
In this paper, an analysis of a linear variable differential transformer (LVDT) is performed to design the sensor and evaluate its performance. To do this, the finite element method (FEM) is used and a parametric analysis is conducted. From this analysis, the performance of an LVDT sensor can be investigated precisely before actual manufacturing.
Adding image information corresponding to the shape of the objects' surfaces on environmental maps
Shinya Kawakami, T. Takubo, K. Ohara, Y. Mae, T. Arai
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145949
In this paper, we propose adding image information to a map in order to create an intuitive interface for exploring unknown environments. On this map, a high-resolution photo is attached to each mapped object. The shooting angle and position for each picture are determined by the required image resolution, the camera specifications and the object's shape. The appearance from a desired direction can be confirmed intuitively by referring to the object's Shooting Vector. To build the proposed map, high-quality image information should be acquired along the Shooting Vector. We develop a tool for making such maps and confirm its effectiveness through experiments.
Impact-based contextual service selection in a ubiquitous robotic environment
B. Cogrel, B. Daachi, Y. Amirat
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145983
Context is crucial to the way actions are perceived and performed, especially in ubiquitous robotics, where context is rich and subject to substantial variation. Given that service selection focuses on the non-functional performance of services, it must be tightly related to context. Unfortunately, to the best of our knowledge, previous work has not effectively considered this relation. First, most existing selection models rely on Quality of Service (QoS) parameters estimated from previous executions; however, two consecutive executions might occur in very different contexts and thus behave differently. This paper therefore argues that these QoS parameters should be predicted from context. Furthermore, the aggregation of these QoS parameters into a score reflects the expectations placed on a service, so it should also be context-dependent. In this article, a solution addressing these points is proposed for auxiliary services. Auxiliary services assist another service during its execution, usually by delivering a data stream. Instead of focusing on their individual performance, selection considers their impact on the assisted service. We propose to obtain this impact model with a multilayer perceptron under batch learning, with particular attention given to sample generation. The model is validated in a ubiquitous robotic scenario involving localization service selection.
A path planning algorithm using artificial potential field based on probability map
Min-Ho Kim, Jung-Hun Heo, Yuanlong Wei, Min-Cheol Lee
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145929
Path planning is a major issue in mobile robot control. Among path planning methods, the artificial potential field approach is widely used for mobile robots because it provides a simple and effective motion control input. However, it is sometimes hard to detect the exact shape of an obstacle, because some obstacles move, so only an occupancy probability can be detected. We therefore suggest a path planning method that uses a potential field incorporating this probability information. We then simulate our algorithm and present the results.
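The idea of weighting repulsion by obstacle probability can be sketched as follows. The gains, influence distance and `potential_step` helper are illustrative assumptions, not the paper's formulation; the repulsive term is the standard potential-field gradient scaled by the obstacle's occupancy probability.

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0, step=0.1):
    """One gradient step on an artificial potential field in which each
    obstacle's repulsive force is scaled by its occupancy probability.
    `obstacles` is a list of ((x, y), probability) pairs."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                      # attraction toward the goal
    for obs, p in obstacles:
        diff = pos - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                              # repulsion only within range d0
            force += p * k_rep * (1.0/d - 1.0/d0) / d**3 * diff
    return pos + step * force

# Iterate toward the goal; an uncertain (p = 0.9) obstacle deflects the path.
pos = np.array([0.0, 0.0])
for _ in range(300):
    pos = potential_step(pos, goal=(5.0, 0.0), obstacles=[((2.5, 0.2), 0.9)])
```

An obstacle with a low occupancy probability repels only weakly, so the planner risks passing closer to it, which is the trade-off the probability map encodes.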
User-oriented tele-presence service robot
Seunghwan Park, Wonpil Yu, Jaeil Cho
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6146039
A user-oriented service robot, ETRO, is implemented. To satisfy users, it has various parts that provide humanlike services to spectators; these parts are of two kinds, permanent and changeable. The robot also employs a tele-presence system to show the reactions visitors expect and to control the robot precisely.
Development of a robot behavior controller for an r-learning system using OPRoS
S. Ji, Jae-Seong Han, Sang-Moo Lee, Byung-Wook Choi
Pub Date: 2011-11-01 | DOI: 10.1109/URAI.2011.6145917
In this paper, we present an r-learning (robot-learning) system for children who are unfamiliar with educational tools. Our system is characterized by the following features. First, it provides children with personalized educational instruction that takes their learning ability into account: learning contents are selected according to each child's learning history, extracted from human-robot interaction data. Second, it helps children learn educational contents easily and joyfully through various behavioral interactions with a robot. Third, r-learning scenarios are modeled with Petri nets to handle exceptions during learning, and robot contents can be composed from predefined robot behaviors. Finally, we implement the system software using OPRoS, a software platform for robotic services.