First validation of the Haptic Sandwich: A shape changing handheld haptic navigation aid
A. Spiers, A. Dollar, J. Linden, Maria Oshodi
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251447
This paper presents the Haptic Sandwich, a handheld robotic device designed to provide navigation instructions to pedestrians through a novel shape-changing modality. The device resembles a cube with an articulated upper half that can rotate and translate (extend) relative to the bottom half, which is grounded in the user's hand. The poses assumed by the device simultaneously correspond to heading and proximity to a navigational target. The Haptic Sandwich provides an alternative to screen- and/or audio-based navigation technologies for both visually impaired and sighted pedestrians. Unlike many robotic or haptic navigational solutions, the Haptic Sandwich is discreet and unobtrusive in both form and sensory stimulus. Due to the novel nature of the interface, two user studies were undertaken to validate the concept and device. In the first experiment, stationary participants attempted to identify poses assumed by the device, which was hidden from view. 80% of poses were correctly identified, and a further 17.5% of responses had the minimal possible error; multi-DOF errors accounted for only 1.1% of all responses. Perception accuracy differed significantly between the rotation and extension DOF. In the second study, participants attempted to locate a sequence of invisible navigational targets while walking with the device. Good navigational ability was demonstrated after minimal training: all participants were able to locate all targets, utilizing both DOF, with walking-path efficiencies between 32% and 56%. In summary, the paper presents the design of a novel shape-changing haptic user interface intended to be intuitive and unobtrusive, validated by stationary perceptual experiments and an embodied (walking) target-finding pilot study.
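The two-DOF mapping described in the abstract (rotation tracks heading to the target, extension tracks remaining distance) can be sketched as below. This is an illustrative guess at the control mapping, not the paper's implementation; the function name, stroke length, and saturation range are all assumptions.

```python
# Hypothetical sketch of the Haptic Sandwich's pose mapping: rotation follows
# the bearing error to the target, extension shrinks as the user approaches.
MAX_EXTENSION_MM = 15.0   # assumed full stroke of the articulated upper half
FULL_RANGE_M = 50.0       # assumed distance at which extension saturates

def device_pose(user_heading_deg, target_bearing_deg, distance_m):
    """Return (rotation_deg, extension_mm) for the two DOF."""
    # Rotation: signed heading error, wrapped into [-180, 180).
    error = (target_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    # Extension: proportional to remaining distance, clamped to the stroke.
    frac = min(distance_m / FULL_RANGE_M, 1.0)
    return error, frac * MAX_EXTENSION_MM
```

With this mapping the device collapses to a flat, aligned cube exactly when the user stands at the target, which matches the "pose corresponds to heading and proximity" idea in the abstract.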
The KIT whole-body human motion database
Christian Mandery, Ömer Terlemez, Martin Do, N. Vahrenkamp, T. Asfour
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251476
We present a large-scale whole-body human motion database consisting of captured raw motion data as well as the corresponding post-processed motions. This database serves as a key element for a wide variety of research questions related, e.g., to human motion analysis, imitation learning, action recognition, and motion generation in robotics. In contrast to previous approaches, the motion data in our database covers the motions of the observed human subject as well as the objects with which the subject interacts. Information about human-object relations is crucial for properly understanding human actions and reproducing them in a goal-directed way on a robot. To facilitate the creation and processing of human motion data, we propose procedures and techniques for motion capture, for labeling and organizing the captured data based on a Motion Description Tree, and for normalizing human motion to a unified representation based on a reference model of the human body. We provide software tools and interfaces to the database that allow access and efficient search with the proposed motion representation.
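The Motion Description Tree idea of organizing recordings under hierarchical labels can be illustrated with a toy sketch. The tree contents, schema, and search helper below are invented for illustration; they are not the KIT database's actual data model or API.

```python
# Toy Motion Description Tree: inner keys are description labels, and each
# node may carry a "motions" list of recording ids. Purely illustrative.
TREE = {
    "locomotion": {
        "walk": {"motions": ["walk_slow_01", "walk_fast_02"]},
        "run": {"motions": ["run_01"]},
    },
    "manipulation": {
        "grasp": {"motions": ["grasp_cup_01"]},
    },
}

def find_motions(tree, path):
    """Descend the label path, then collect every motion id below that node."""
    node = tree
    for label in path:
        node = node[label]
    found, stack = [], [node]
    while stack:
        n = stack.pop()
        found.extend(n.get("motions", []))
        stack.extend(v for k, v in n.items() if k != "motions")
    return sorted(found)
```

A query like `find_motions(TREE, ["locomotion"])` then returns all locomotion recordings regardless of sub-label, which is the kind of efficient hierarchical search the abstract describes.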
A fast dense stereo matching algorithm with an application to 3D occupancy mapping using quadrocopters
Radouane Ait Jellal, A. Zell
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251515
We propose a fast algorithm for computing stereo correspondences and correcting mismatches. The correspondences are computed using stereo block matching and refined with a depth-aware method. We compute 16 disparities at a time using SSE instructions. We evaluated our method on the Middlebury benchmark and obtained promising results for practical real-time applications. The use of SSE instructions reduces the time needed to process the Tsukuba stereo pair to 8 milliseconds (125 fps) on a Core i5 CPU at 2×3.3 GHz. Our disparity refinement method corrects 40% of the wrong matches at an additional computational cost of 5.2% (0.41 ms). The algorithm has been used to build 3D occupancy grid maps from stereo images, using the datasets provided by the EuRoC Robotic Challenge. The reconstruction was accurate enough to enable safe real-time navigation.
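The block-matching baseline the paper builds on can be sketched as a winner-takes-all search over a sum-of-absolute-differences (SAD) cost. This NumPy sketch stands in for the idea only: the paper's SSE vectorization and depth-aware refinement are omitted, and the parameter names are assumptions.

```python
import numpy as np

# Minimal winner-takes-all stereo block matching with a SAD cost.
def block_matching(left, right, max_disp=16, radius=2):
    h, w = left.shape
    disparity = np.zeros((h, w), dtype=np.int32)
    best_cost = np.full((h, w), np.inf)
    for d in range(max_disp):
        # Per-pixel absolute difference at disparity d; columns with no
        # valid match (x < d) keep an infinite cost.
        cost = np.full((h, w), np.inf)
        cost[:, d:] = np.abs(left[:, d:].astype(np.float64) - right[:, :w - d])
        # Aggregate the cost over a (2*radius+1)^2 window (box filter).
        k = 2 * radius + 1
        padded = np.pad(cost, radius, mode="edge")
        agg = sum(padded[i:i + h, j:j + w] for i in range(k) for j in range(k))
        # Keep the disparity with the lowest aggregated cost so far.
        update = agg < best_cost
        disparity[update] = d
        best_cost[update] = agg[update]
    return disparity
```

Vectorizing the inner absolute-difference loop over 16 disparities at once is exactly where SIMD instructions like SSE pay off, since the same subtract/abs/accumulate is applied to contiguous pixel runs.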
Integrating spatial concepts into a probabilistic concept web
H. Çelikkanat, E. Sahin, Sinan Kalkan
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251465
In this paper, we study the learning and representation of grounded spatial concepts in a probabilistic concept web that connects them with other noun, adjective, and verb concepts. Specifically, we focus on prepositional spatial concepts such as “on”, “below”, “left”, “right”, “in front of”, and “behind”. In our prior work (Celikkanat et al., 2015), inspired by the distributed, highly connected conceptual representation in the human brain, we proposed using a Markov Random Field to model a concept web on a humanoid robot. To adequately express the unidirectional (i.e., non-symmetric) nature of spatial prepositions, in this work we extend the Markov Random Field into a simple hybrid Markov Random Field model that allows both undirected and directed connections between concepts. We demonstrate that our humanoid robot, iCub, is able to (i) extract meaningful spatial concepts, in addition to noun, adjective, and verb concepts, from a scene using the proposed model, (ii) correct wrong initial predictions using the connectedness of the concept web, and (iii) respond correctly to queries involving spatial concepts, such as ball-left-of-the-cup.
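The point of mixing undirected and directed connections is that a symmetric potential cannot distinguish “ball left-of cup” from “cup left-of ball”, while an asymmetric one can. The toy below makes that concrete with invented concepts and potential values; it is a cartoon of the modeling idea, not the paper's model or its learned potentials.

```python
import itertools

CONCEPTS = ["ball", "cup"]

def undirected(a, b):
    # Symmetric (undirected) co-occurrence potential: phi(a, b) == phi(b, a).
    return 1.0 if {a, b} == {"ball", "cup"} else 0.5

def directed(a, b):
    # Asymmetric (directed) potential for the relation "a left-of b".
    return 2.0 if (a, b) == ("ball", "cup") else 0.1

def best_assignment():
    # Brute-force MAP over which concept fills each of two scene slots.
    def score(x):
        return undirected(x[0], x[1]) * directed(x[0], x[1])
    return max(itertools.product(CONCEPTS, repeat=2), key=score)
```

Because `directed` breaks the symmetry, the maximum-scoring assignment places the ball on the left, which a purely undirected web could not express.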
An approach to multi-agent pursuit evasion games using reinforcement learning
A. Bilgin, Esra Kadioglu Urtis
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251450
The game of pursuit-evasion has long been a popular research subject in robotics. Reinforcement learning, in which an agent learns from its interaction with the environment, is widely used in the pursuit-evasion domain. In this paper, we study the multi-agent pursuit-evasion problem using reinforcement learning and report experimental results. The intelligent agents use Watkins's Q(λ)-learning algorithm to learn from their interactions. Q-learning is an off-policy temporal-difference control algorithm; the method we utilize unifies Q-learning with eligibility traces, using backup information only up to the first occurrence of an exploratory action. In our work, concurrent learning is adopted for the pursuit team: each member of the team has its own action-value function and updates its information space independently.
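Watkins's Q(λ) is standard enough to sketch: it is tabular Q-learning plus eligibility traces, with the traces reset whenever a non-greedy (exploratory) action is taken, which is the "backup only until the first exploration" behavior the abstract mentions. The toy corridor environment and hyperparameters below are assumptions, not the paper's pursuit-evasion setup.

```python
import random

N, GOAL = 5, 4                  # states 0..4, reward 1 on reaching state 4
ACTIONS = (1, -1)               # move right / move left
ALPHA, GAMMA, LAM, EPS = 0.5, 0.9, 0.8, 0.1

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, 1.0 if s2 == GOAL else 0.0

def run(episodes=200, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        e = {k: 0.0 for k in Q}          # eligibility traces
        s = 0
        while s != GOAL:
            greedy = max(ACTIONS, key=lambda a: Q[(s, a)])
            a = rng.choice(ACTIONS) if rng.random() < EPS else greedy
            s2, r = step(s, a)
            # Off-policy TD error: bootstrap from the best next action.
            delta = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)]
            e[(s, a)] += 1.0
            for k in Q:
                Q[k] += ALPHA * delta * e[k]
                # Watkins's cut: traces survive only after greedy actions.
                e[k] = GAMMA * LAM * e[k] if a == greedy else 0.0
            s = s2
    return Q

Q = run()
```

In a concurrent-learning pursuit team, each pursuer would simply run its own copy of `run` with its own Q-table, as the abstract describes.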
Safety-aware trajectory scaling for Human-Robot Collaboration with prediction of human occupancy
M. Ragaglia, A. Zanchettin, P. Rocco
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251438
Planning and control of an industrial manipulator for safe Human-Robot Collaboration (HRC) is a difficult task because of two conflicting requirements: ensuring the worker's safety and completing the task assigned to the robot. This paper proposes a trajectory scaling algorithm for safe HRC that relies on real-time prediction of human occupancy. Knowing the space that the human can occupy within the robot's stopping time, the controller scales the manipulator's velocity, enabling safe HRC while avoiding task interruption. Finally, experimental results are presented and discussed.
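The scaling idea can be sketched as a speed command that ramps down as the predicted human occupancy approaches the robot's stopping envelope. The linear ramp, names, and margins below are assumptions for illustration; the paper's actual control law is not reproduced here.

```python
# Minimal sketch: slow the robot as the human-robot separation approaches
# the distance the robot needs to stop, instead of halting the task outright.
def scale_velocity(v_nominal, separation_m, stopping_dist_m, safety_margin_m=0.2):
    """Return the commanded speed given the current human-robot separation."""
    danger = stopping_dist_m + safety_margin_m
    if separation_m >= 2 * danger:
        return v_nominal                   # far away: run at full speed
    if separation_m <= danger:
        return 0.0                         # inside the stopping envelope: halt
    # Linear ramp between the envelope boundary and twice its radius.
    return v_nominal * (separation_m - danger) / danger
```

Because the speed goes to zero continuously rather than via an emergency stop, the task resumes from where it slowed down once the human retreats, which is the "avoiding task interruption" benefit the abstract claims.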
Fast ICP-SLAM for a bi-steerable mobile robot in large environments
R. Tiar, M. Lakrouf, O. Azouaoui
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251519
This paper describes the implementation of a local ICP-SLAM (Iterative Closest Point - Simultaneous Localization and Mapping) that speeds up the method presented in [1]. The ICP algorithm requires increasing computation time as the environment grows, leading to poor results for both localization and mapping, so ICP-SLAM is not recommended for real-time use in large environments. To overcome this problem, we introduce a local ICP-SLAM based on partitioning the environment into smaller parts. The method is implemented and tested on the car-like mobile robot “Robucar”, and it improves both computation time and localization accuracy. The experimental results show the effectiveness of the proposed local ICP-SLAM compared to the method in [1].
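At the core of any ICP variant is the rigid alignment of matched point sets, which has a closed-form SVD solution. The sketch below shows only that building block; the surrounding ICP loop (nearest-neighbour matching, iteration to convergence) and the paper's environment partitioning are omitted.

```python
import numpy as np

# Closed-form least-squares rigid transform (Kabsch/Procrustes solution)
# between already-matched 2-D point sets: dst ~ R @ src + t.
def rigid_align(src, dst):
    """src, dst: (N, 2) arrays of corresponding points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Because this solve is cheap, ICP's cost is dominated by correspondence search over the map points; restricting that search to a local partition of the environment, as the paper does, is what keeps the loop real-time as the map grows.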
Modelling daily actions through hand-based spatio-temporal features
Olga Mur, M. Frigola, A. Casals
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251499
In this paper, we propose a new approach to domestic action recognition based on a set of features that describe the relation between the poses and movements of both hands. These features represent a set of basic kitchen actions in terms of the mimicked hand movements alone, without needing information about the objects present in the scene. They specifically address the intra-class dissimilarity problem, which occurs when the same action is performed in different ways. The goal is to create a generic methodology that enables a robotic assistant to recognize actions related to daily-life activities and thus be endowed with proactive behavior. The proposed system uses depth and color data acquired from a Kinect-style sensor and a hand-tracking system. We analyze the relevance of the proposed hand-based features using a state-space search approach. Finally, we show the effectiveness of our action recognition approach on our own dataset.
An autonomous firefighting robot
Ahmed Hassanein, M. Elhawary, Nour Jaber, Mohammed El-Abd
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251507
Firefighting has long been a dangerous field, and a lack of technological advancement has contributed to numerous devastating losses. Current firefighting methods are inadequate and inefficient, relying heavily on humans, who are prone to error no matter how extensively they have been trained. A recent trend is to use robots instead of humans to handle fire hazards, mainly because robots can be deployed in situations too dangerous for any individual. In our project, we develop a robot that is able to locate and extinguish fire in a given environment. The robot navigates the arena and avoids any obstacles it encounters along the way.
Humanlike, task-specific reaching and grasping with redundant arms and low-complexity hands
Minas Liarokapis, A. Dollar, K. Kyriakopoulos
Pub Date: 2015-07-27 | DOI: 10.1109/ICAR.2015.7251501
In this paper, we propose a methodology for closed-loop, humanlike, task-specific reaching and grasping with redundant robot arms and low-complexity robot hands. Human demonstrations are utilized in a learning-by-demonstration fashion to map human motion to humanlike robot motion. Principal Component Analysis (PCA) is used to transform the humanlike robot motion into a low-dimensional manifold, where appropriate Navigation Function (NF) models are trained. A series of grasp quality measures and task compatibility indexes are employed to guarantee robustness of the computed grasps and task specificity of the goal robot configurations. The final scheme provides anthropomorphic robot motion, task-specific robot arm configurations and hand grasping postures, optimized fingertip placement on the object surface (resulting in robust grasps), and guaranteed convergence to the desired goals. The position and geometry of the objects are assumed to be known a priori. The efficiency of the proposed methods is assessed with simulations and experiments involving different robot arm-hand systems. The proposed scheme can be useful for various Human-Robot Interaction (HRI) applications.
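The PCA step above (projecting demonstrated configurations onto a low-dimensional manifold and mapping back) can be sketched in a few lines. The data, the choice of two components, and the function names are illustrative assumptions; the Navigation Function training that the paper performs in this low-dimensional space is not shown.

```python
import numpy as np

# PCA via SVD of the centred data: rows of W are the principal axes.
def fit_pca(X, n_components=2):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def project(X, mu, W):
    return (X - mu) @ W.T        # joint space -> low-dimensional coordinates

def reconstruct(Z, mu, W):
    return Z @ W + mu            # low-dimensional coordinates -> joint space
```

Planning in the `Z` coordinates and mapping the result back through `reconstruct` is what keeps the generated arm motion on the manifold of humanlike configurations.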