Control method of power-assisted cart with one motor, a differential gear, and brakes based on motion state of the cart
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206114
A. Seino, Yuta Wakabayashi, J. Kinugawa, K. Kosuge
In this study, we propose a control strategy for a power-assisted cart based on its motion state. The power-assisted cart we developed has one motor, a differential gear, and brakes. The cart uses the motor and the differential gear to move forward, and applying a brake to either wheel allows it to turn left or right. The cart can therefore assist the user both when going straight and when turning, despite having only one motor. Previously, we developed a control method that regulates the cart's speed at the operation point so that its magnitude remains constant when the cart starts turning; this was necessary because the characteristics of the differential gear cause a speed change during turning. However, the desired behavior when transitioning from straight motion to turning differs from the desired behavior when returning from turning to straight motion. In this paper, we therefore propose a control method that adjusts the speed in the direction of motion based on the state of the cart. We validated the effectiveness of the proposed method through experiments and discuss the results.
{"title":"Control method of power-assisted cart with one motor, a differential gear, and brakes based on motion state of the cart","authors":"A. Seino, Yuta Wakabayashi, J. Kinugawa, K. Kosuge","doi":"10.1109/IROS.2017.8206114","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206114","url":null,"abstract":"In this study, we propose a control strategy for a power-assisted cart based on its motion state. The power-assisted cart we developed has one motor, a differential gear, and brakes. This cart uses the motor and the differential gear for moving forward, and applying brakes to either wheel allows the cart to turn both left and right. Therefore, the power-assisted cart can support the user when going straight and turning despite having only one motor. In the past we developed a control method that allows to control the cart's speed around the operation point in order to keep its magnitude constant when the cart starts turning. This was necessary, as the differential gear causes a speed change during turning, because of its characteristics. However, the desired behavior when transitioning from straight motion to turning motion is different to the desired behavior when going from turning motion to straight motion. Therefore, in this paper we propose a control method to adjust the speed in the direction of motion based on the state of the cart. We validated the effectiveness of the proposed method through experiments and discussed the results.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"13 1","pages":"2829-2834"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73588593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved object pose estimation via deep pre-touch sensing
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206061
Patrick E. Lancaster, Boling Yang, Joshua R. Smith
For certain manipulation tasks, object pose estimation from head-mounted cameras may not be sufficiently accurate. This is at least in part due to our inability to perfectly calibrate the coordinate frames of today's high-degree-of-freedom robot arms that link the head to the end-effectors. We present a novel framework that combines pre-touch sensing and deep learning to estimate pose more accurately and efficiently. Pre-touch sensing allows our method to localize the object directly with respect to the robot's end-effector, thereby avoiding error caused by miscalibration of the arms. Instead of requiring the robot to scan the entire object with its pre-touch sensor, we use a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information necessary to adjust its original pose estimate. Our region detection network was trained on a new dataset containing objects of widely varying geometries, labeled in a scalable fashion that is free from human bias. The dataset is applicable to any task in which a pre-touch sensor gathers geometric information, and it has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. Compared to two simpler region proposal methods, our deep neural network performs significantly better. In addition, we find that after a sequence of scans, objects can typically be localized to within 0.5 cm of their true position. We also observe that the original pose estimate can often be significantly improved after collecting a single quick scan.
{"title":"Improved object pose estimation via deep pre-touch sensing","authors":"Patrick E. Lancaster, Boling Yang, Joshua R. Smith","doi":"10.1109/IROS.2017.8206061","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206061","url":null,"abstract":"For certain manipulation tasks, object pose estimation from head-mounted cameras may not be sufficiently accurate. This is at least in part due to our inability to perfectly calibrate the coordinate frames of today's high degree of freedom robot arms that link the head to the end-effectors. We present a novel framework combining pre-touch sensing and deep learning to more accurately estimate pose in an efficient manner. The use of pre-touch sensing allows our method to localize the object directly with respect to the robot's end effector, thereby avoiding error caused by miscalibration of the arms. Instead of requiring the robot to scan the entire object with its pre-touch sensor, we use a deep neural network to detect object regions that contain distinctive geometric features. By focusing pre-touch sensing on these regions, the robot can more efficiently gather the information necessary to adjust its original pose estimate. Our region detection network was trained using a new dataset containing objects of widely varying geometries and has been labeled in a scalable fashion that is free from human bias. This dataset is applicable to any task that involves a pre-touch sensor gathering geometric information, and has been made publicly available. We evaluate our framework by having the robot re-estimate the pose of a number of objects of varying geometries. Compared to two simpler region proposal methods, we find that our deep neural network performs significantly better. In addition, we find that after a sequence of scans, objects can typically be localized to within 0.5 cm of their true position. We also observe that the original pose estimate can often be significantly improved after collecting a single quick scan.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"11 1","pages":"2448-2455"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75263405","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient stochastic multicriteria arm trajectory optimization
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206256
D. Pavlichenko, Sven Behnke
Performing manipulation with robotic arms requires a trajectory planning method that takes multiple factors into account: collisions, joint limits, orientation constraints, torques, and the duration of the trajectory. We present an approach for efficiently optimizing arm trajectories with respect to multiple criteria. Our work extends Stochastic Trajectory Optimization for Motion Planning (STOMP). We optimize trajectory duration by including velocity in the optimization, and we propose an efficient cost function with normalized components that allows components to be prioritized according to user-specified requirements. Optimization proceeds in two stages: first with a partial cost function, and in the second stage with the full costs. We compare our method to state-of-the-art methods. In addition, we perform experiments on real robots: the centaur-like robot Momaro and an industrial manipulator.
{"title":"Efficient stochastic multicriteria arm trajectory optimization","authors":"D. Pavlichenko, Sven Behnke","doi":"10.1109/IROS.2017.8206256","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206256","url":null,"abstract":"Performing manipulation with robotic arms requires a method for planning trajectories that takes multiple factors into account: collisions, joint limits, orientation constraints, torques, and duration of a trajectory. We present an approach to efficiently optimize arm trajectories with respect to multiple criteria. Our work extends Stochastic Trajectory Optimization for Motion Planning (STOMP). We optimize trajectory duration by including velocity into the optimization. We propose an efficient cost function with normalized components, which allows prioritizing components depending on user-specified requirements. Optimization is done in two stages: first with a partial cost function and in the second stage with full costs. We compare our method to state-of-the art methods. In addition, we perform experiments on real robots: centaur-like robot Momaro and an industrial manipulator.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"2 1","pages":"4018-4025"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75744605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Morphological optimization for tensegrity quadruped locomotion
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206253
Dawn M. Hustig-Schultz, Vytas SunSpiral, M. Teodorescu
The increasing complexity of soft and hybrid-soft robots highlights the need for more efficient methods of shrinking machine learning solution spaces, and for creative ways to ease rapid prototyping. In this paper, we present an initial exploration of this process using hand-chosen morphologies. Four choices of actuated muscle groups are evaluated on a tensegrity quadruped called MountainGoat (three for a primarily spine-driven morphology and one for a primarily leg-driven morphology), and their locomotion speeds are compared. Each design iteration seeks to reduce the total number of active muscles, and consequently the dimensionality of the machine learning problem, while still producing effective locomotion. The reduction in active muscles also aims to simplify future rapid prototyping of the robot.
{"title":"Morphological optimization for tensegrity quadruped locomotion","authors":"Dawn M. Hustig-Schultz, Vytas SunSpiral, M. Teodorescu","doi":"10.1109/IROS.2017.8206253","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206253","url":null,"abstract":"The increasing complexity of soft and hybrid-soft robots highlights the need for more efficient methods of minimizing machine learning solution spaces, and creative ways to ease the process of rapid prototyping. In this paper, we present an initial exploration of this process, using hand-chosen morphologies. Four different choices of muscle groups will be actuated on a tensegrity quadruped called MountainGoat: three for a primarily spine-driven morphology, and one for a primarily leg-driven morphology, and the locomotion speed will be compared. Each iteration of design seeks to reduce the total number of active muscles, and consequently reduce the dimensionality of the problem for machine learning, while still producing effective locomotion. The reduction in active muscles seeks to simplify future rapid prototyping of the robot.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"1 1","pages":"3990-3995"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73856749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dimensional inconsistencies in code and ROS messages: A study of 5.9M lines of code
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8202229
J. Ore, Sebastian G. Elbaum, Carrick Detweiler
This work presents a study of robot software using the Robot Operating System (ROS), focusing on detecting inconsistencies in physical unit manipulation. We discuss how dimensional analysis, the rules governing how physical quantities are combined, can be used to detect inconsistencies in robot software that are otherwise difficult to detect. Using a corpus of ROS software with 5.9M lines of code, we measure the frequency of these dimensional inconsistencies and find them in 6% (211 / 3,484) of repositories that use ROS. We find that the inconsistency type ‘Assigning multiple units to a variable’ accounts for 75% of inconsistencies in ROS code. We identify the ROS classes and physical units most likely to be involved with dimensional inconsistencies, and find that the ROS Message type geometry_msgs::Twist is involved in over half of all inconsistencies and is used by developers in ways contrary to Twist's intent. We further analyze the frequency of physical units used in ROS programs as a proxy for assessing how developers use ROS, and discuss the practical implications of our results including how to detect and avoid these inconsistencies.
{"title":"Dimensional inconsistencies in code and ROS messages: A study of 5.9M lines of code","authors":"J. Ore, Sebastian G. Elbaum, Carrick Detweiler","doi":"10.1109/IROS.2017.8202229","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202229","url":null,"abstract":"This work presents a study of robot software using the Robot Operating System (ROS), focusing on detecting inconsistencies in physical unit manipulation. We discuss how dimensional analysis, the rules governing how physical quantities are combined, can be used to detect inconsistencies in robot software that are otherwise difficult to detect. Using a corpus of ROS software with 5.9M lines of code, we measure the frequency of these dimensional inconsistencies and find them in 6% (211 / 3,484) of repositories that use ROS. We find that the inconsistency type ‘Assigning multiple units to a variable’ accounts for 75% of inconsistencies in ROS code. We identify the ROS classes and physical units most likely to be involved with dimensional inconsistencies, and find that the ROS Message type geometry_msgs::Twist is involved in over half of all inconsistencies and is used by developers in ways contrary to Twist's intent. We further analyze the frequency of physical units used in ROS programs as a proxy for assessing how developers use ROS, and discuss the practical implications of our results including how to detect and avoid these inconsistencies.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"80 1","pages":"712-718"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75893246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Full 3D rotation estimation in scanning electron microscope
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8202284
A. Kudryavtsev, S. Dembélé, N. L. Fort-Piat
Estimation of 3D object position is a crucial step for a variety of robotics and computer vision applications, including 3D reconstruction and object manipulation. At the microscale, new types of visual sensors, such as the Scanning Electron Microscope (SEM), are used. Today, micro- and nanomanipulation tasks, notably component assembly, are in most cases performed in teleoperated mode, and measuring object position and orientation is a crucial step toward automatic object handling. Current methods of pose estimation in the SEM can recover full object motion when a computer-aided design (CAD) model of the object is available; when the model is unknown, most methods estimate only in-plane translations and the rotation around the camera's optical axis. In the literature, the SEM is modeled as a camera with parallel projection, i.e., an affine camera, which implies image invariance to z-motion and the bas-relief ambiguity. In this paper, we address the problem of measuring the full 3D rotation of an unknown scene with an uncalibrated SEM and no additional sensors. Rotations are estimated from image triplets by solving a spherical triangle constructed from fundamental matrices alone, without intrinsic calibration, which avoids the ambiguities of parallel projection. The presented results, obtained in simulation and on real data, validate the proposed scheme.
{"title":"Full 3D rotation estimation in scanning electron microscope","authors":"A. Kudryavtsev, S. Dembélé, N. L. Fort-Piat","doi":"10.1109/IROS.2017.8202284","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202284","url":null,"abstract":"Estimation of 3D object position is a crucial step for a variety of robotics and computer vision applications including 3D reconstruction and object manipulation. When working in microscale, new types of visual sensors are used such as Scanning Electron Microscope (SEM). Nowadays, micro- and nanomanipulation tasks, namely components assembly, are performed in teleoperated mode in most of the cases. Measuring object position and orientation is a crucial step towards automatic object handling. Current methods of pose estimation in SEM allow recovering full object movement using its computer-aided design (CAD) model. If the model is not known, most methods allow to estimate only in-plane translations and rotation around camera optical axis. In the literature, SEM is considered as a camera with parallel projection or an affine camera, which means image invariance to z-movement and bas-relief ambiguity. In this paper, authors address the problem of measuring full 3D rotation of the unknown scene for uncalibrated SEM without additional sensors. Rotations are estimated from image triplets by solving a spherical triangle from fundamental matrices only, without need of intrinsic calibration, allowing to avoid parallel projection ambiguities. The presented results, obtained in simulation and on real data, allow validating the proposed scheme.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"32 1","pages":"1134-1139"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79137055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multimodal execution monitor with anomaly classification for robot-assisted feeding
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206437
Daehyung Park, Hokeun Kim, Yuuna Hoshi, Zackory M. Erickson, Ariel Kapusta, C. Kemp
Activities of daily living (ADLs) are important for quality of life. Robotic assistance offers the opportunity for people with disabilities to perform ADLs on their own. However, when a complex semi-autonomous system provides real-world assistance, occasional anomalies are likely to occur. Robots that can detect, classify and respond appropriately to common anomalies have the potential to provide more effective and safer assistance. We introduce a multimodal execution monitor to detect and classify anomalous executions when robots operate near humans. Our system builds on our past work on multimodal anomaly detection. Our new monitor classifies the type and cause of common anomalies using an artificial neural network. We implemented and evaluated our execution monitor in the context of robot-assisted feeding with a general-purpose mobile manipulator. In our evaluations, our monitor outperformed baseline methods from the literature. It succeeded in detecting 12 common anomalies from 8 able-bodied participants with 83% accuracy and classifying the types and causes of the detected anomalies with 90% and 81% accuracies, respectively. We then performed an in-home evaluation with Henry Evans, a person with severe quadriplegia. With our system, Henry successfully fed himself while the monitor detected, classified the types, and classified the causes of anomalies with 86%, 90%, and 54% accuracy, respectively.
{"title":"A multimodal execution monitor with anomaly classification for robot-assisted feeding","authors":"Daehyung Park, Hokeun Kim, Yuuna Hoshi, Zackory M. Erickson, Ariel Kapusta, C. Kemp","doi":"10.1109/IROS.2017.8206437","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206437","url":null,"abstract":"Activities of daily living (ADLs) are important for quality of life. Robotic assistance offers the opportunity for people with disabilities to perform ADLs on their own. However, when a complex semi-autonomous system provides real-world assistance, occasional anomalies are likely to occur. Robots that can detect, classify and respond appropriately to common anomalies have the potential to provide more effective and safer assistance. We introduce a multimodal execution monitor to detect and classify anomalous executions when robots operate near humans. Our system builds on our past work on multimodal anomaly detection. Our new monitor classifies the type and cause of common anomalies using an artificial neural network. We implemented and evaluated our execution monitor in the context of robot-assisted feeding with a general-purpose mobile manipulator. In our evaluations, our monitor outperformed baseline methods from the literature. It succeeded in detecting 12 common anomalies from 8 able-bodied participants with 83% accuracy and classifying the types and causes of the detected anomalies with 90% and 81% accuracies, respectively. We then performed an in-home evaluation with Henry Evans, a person with severe quadriplegia. With our system, Henry successfully fed himself while the monitor detected, classified the types, and classified the causes of anomalies with 86%, 90%, and 54% accuracy, respectively.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"57 1","pages":"5406-5413"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79257634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-based affordances detection with Convolutional Neural Networks and dense Conditional Random Fields
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206484
Anh Nguyen, D. Kanoulas, D. Caldwell, N. Tsagarakis
We present a new method to detect object affordances in real-world scenes using deep Convolutional Neural Networks (CNN), an object detector, and dense Conditional Random Fields (CRF). Our system first trains an object detector to generate bounding box candidates from the images. A deep CNN is then used to learn the depth features from these bounding boxes. Finally, these feature maps are post-processed with a dense CRF to improve the prediction along class boundaries. The experimental results on our new challenging dataset show that the proposed approach outperforms recent state-of-the-art methods by a substantial margin. Furthermore, from the detected affordances we introduce a grasping method that is robust to noisy data. We demonstrate the effectiveness of our framework on the full-size humanoid robot WALK-MAN using different objects in real-world scenarios.
{"title":"Object-based affordances detection with Convolutional Neural Networks and dense Conditional Random Fields","authors":"Anh Nguyen, D. Kanoulas, D. Caldwell, N. Tsagarakis","doi":"10.1109/IROS.2017.8206484","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206484","url":null,"abstract":"We present a new method to detect object affordances in real-world scenes using deep Convolutional Neural Networks (CNN), an object detector and dense Conditional Random Fields (CRF). Our system first trains an object detector to generate bounding box candidates from the images. A deep CNN is then used to learn the depth features from these bounding boxes. Finally, these feature maps are post-processed with dense CRF to improve the prediction along class boundaries. The experimental results on our new challenging dataset show that the proposed approach outperforms recent state-of-the-art methods by a substantial margin. Furthermore, from the detected affordances we introduce a grasping method that is robust to noisy data. We demonstrate the effectiveness of our framework on the full-size humanoid robot WALK-MAN using different objects in real-world scenarios.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"11 1","pages":"5908-5915"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79491931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contouring error vector and cross-coupled control of multi-axis servo system
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8206024
Ran Shi, Xiang Zhang, Y. Lou
The calculation of the contouring error and the cross-coupled gains has long been a critical issue in the application of cross-coupled control. Traditionally, linear and circular approximations are widely used to determine the contouring error and cross-coupled gains; with these approximations, however, the calculation becomes complicated, especially in three-dimensional applications. In this paper, a contouring error vector is established in the task coordinate frame, so that the contouring error and cross-coupled gains can be obtained easily from the magnitude and orientation of that vector. Experimental results on a three-axis CNC machine indicate that the proposed approach simplifies the calculation of the contouring error and cross-coupled gains.
{"title":"Contouring error vector and cross-coupled control of multi-axis servo system","authors":"Ran Shi, Xiang Zhang, Y. Lou","doi":"10.1109/IROS.2017.8206024","DOIUrl":"https://doi.org/10.1109/IROS.2017.8206024","url":null,"abstract":"The contouring error and cross-coupled gains calculation have always been the critical issues in the application of cross-coupled control. Traditionally, the linear approximation and circular approximation are widely used to determine the contouring error and cross-coupled gains. However, for linear approximation and circular approximation, the contouring error and cross-coupled gains are calculated sophisticatedly, especially in three-dimensional applications. In this paper, a contouring error vector is established under task coordinate frame, then the contouring error and cross-coupled gains can be easily obtained based on the magnitude and orientation of the contouring error vector. The experimental results on a three-axis CNC machine indicate the proposed approach simplifies the calculation of contouring error and cross-coupled gains.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"29 1","pages":"2062-2067"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81435004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence of fingertip and object shape on the manipulation ability of underactuated hands
Pub Date: 2017-09-01 | DOI: 10.1109/IROS.2017.8202176
Diego Ospina, A. Ramirez-Serrano
This paper presents a kinetostatic framework to analyze the grasping and in-hand object manipulation abilities of two-finger underactuated hands. The framework includes a procedure to compute the Grasp Matrix and the Hand Jacobian for objects and fingertips of arbitrary shape considering rolling contacts without slipping. The usefulness of the proposed approach is illustrated in a case study of a pair of underactuated fingers driven by a tendon-pulley differential transmission mechanism and capable of performing in-hand object manipulation. The manipulability region for different object and fingertip shapes is computed and the results are discussed.
{"title":"Influence of fingertip and object shape on the manipulation ability of underactuated hands","authors":"Diego Ospina, A. Ramirez-Serrano","doi":"10.1109/IROS.2017.8202176","DOIUrl":"https://doi.org/10.1109/IROS.2017.8202176","url":null,"abstract":"This paper presents a kinetostatic framework to analyze the grasping and in-hand object manipulation abilities of two-finger underactuated hands. The framework includes a procedure to compute the Grasp Matrix and the Hand Jacobian for objects and fingertips of arbitrary shape considering rolling contacts without slipping. The usefulness of the proposed approach is illustrated in a case study of a pair of underactuated fingers driven by a tendon-pulley differential transmission mechanism and capable of performing in-hand object manipulation. The manipulability region for different object and fingertip shapes is computed and the results are discussed.","PeriodicalId":6658,"journal":{"name":"2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)","volume":"25 1","pages":"329-334"},"PeriodicalIF":0.0,"publicationDate":"2017-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81496817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}