
2020 IEEE-RAS 20th International Conference on Humanoid Robots (Humanoids) — Latest Publications

Whole-body walking pattern using pelvis-rotation for long stride and arm swing for yaw angular momentum compensation
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555794
Beomyeong Park, Myeong-Ju Kim, E. Sung, Junhyung Kim, Jaeheung Park
A long stride can enable a humanoid robot to achieve fast and stable walking. For a long stride, the kinematics of the robot should be fully utilized, and walking with pelvic rotation can be a solution. A rotational trajectory of the pelvis that accounts for kinematic limitations is needed for pelvis-rotation walking. When the robot walks with a long stride while rotating the pelvis, the yaw momentum may be larger than when it walks with the pelvis fixed. This momentum is caused by the rotation of the pelvis and the leg motion, and hence walking with pelvic rotation may become unstable. In this paper, we propose to control the lower body of a robot as a redundant system with leg joints and a waist joint for walking with pelvic rotation. The position of the base frame used to implement this redundant system for the lower body is also proposed. In addition, a quadratic programming (QP) controller is formulated to enable arm swing for yaw momentum compensation while controlling the lower body. The feasibility of the proposed control method was verified in simulation and in an experiment: walking with a long stride while rotating the pelvis using the QP controller and compensating the yaw momentum by means of arm swing.
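The QP itself is not given in the abstract; as a rough illustration of the underlying idea, the sketch below solves a bounded least-squares problem (a small QP) for arm joint velocities that cancel a given yaw angular momentum. The Jacobian, regularization weight, and velocity limits are invented stand-ins, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical yaw-momentum Jacobian of the arm joints: maps the 8 arm joint
# velocities (4 per arm in this toy model) to a scalar yaw angular momentum.
rng = np.random.default_rng(0)
J_yaw = rng.normal(size=(1, 8))

L_yaw_legs = 2.5   # yaw momentum induced by pelvis rotation and leg swing [kg*m^2/s]
reg = 1e-2         # regularization keeping the arm motion small
qd_max = 3.0       # joint-velocity limit [rad/s]

# Least-squares form of the QP:
#   min  || J_yaw * qd + L_yaw_legs ||^2 + reg * ||qd||^2
#   s.t. |qd| <= qd_max
A = np.vstack([J_yaw, np.sqrt(reg) * np.eye(8)])
b = np.concatenate([[-L_yaw_legs], np.zeros(8)])
sol = lsq_linear(A, b, bounds=(-qd_max, qd_max))

qd_arms = sol.x
print("arm joint velocities:", np.round(qd_arms, 3))
print("residual yaw momentum:", float((J_yaw @ qd_arms)[0] + L_yaw_legs))
```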
Citations: 6
A Human-Aware Method to Plan Complex Cooperative and Autonomous Tasks using Behavior Trees
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555683
Fabio Fusaro, Edoardo Lamon, E. Momi, A. Ajoudani
This paper proposes a novel human-aware method that generates robot plans for autonomous and human-robot cooperative tasks in industrial environments. We modify the standard Behavior Trees (BTs) formulation to take action-related costs into account, and design suitable metrics and cost functions that account for cooperation with a worker, considering human availability, decisions, and ergonomics. The developed approach allows the robot to adapt its plan online to the human partner by choosing the tasks that minimize the execution cost(s). Through simulations, we first tuned the weights of the cost function for a realistic scenario. Subsequently, the developed method is validated through a proof-of-concept experiment representing the boxing of four different objects. The results show that the proposed cost-based BTs, along with the defined costs, enable the robot to react online and plan new tasks according to dynamic changes of the environment, in terms of human presence and intentions. Our results indicate that the proposed solution has high potential to increase robot reactivity and flexibility while, at the same time, optimizing the decision-making process according to human actions.
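A minimal sketch of cost-driven node selection in a behavior tree, with invented cost terms and weights standing in for the paper's metrics (human availability, decisions, ergonomics):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Task:
    name: str
    duration: float           # expected execution time [s]
    needs_human: bool
    ergonomic_penalty: float  # 0 = comfortable .. 1 = very straining

def task_cost(t: Task, human_available: bool,
              w_time: float = 1.0, w_ergo: float = 5.0,
              unavailable_penalty: float = 100.0) -> float:
    # Weighted sum of hypothetical cost terms; a large penalty effectively
    # defers cooperative tasks while the worker is busy.
    cost = w_time * t.duration + w_ergo * t.ergonomic_penalty
    if t.needs_human and not human_available:
        cost += unavailable_penalty
    return cost

def min_cost_selector(children: List[Task], human_available: bool) -> Task:
    # BT-style selector that ticks the child with the lowest current cost,
    # re-evaluated online as the human state changes.
    return min(children, key=lambda t: task_cost(t, human_available))

tasks = [Task("hand_part_to_worker", 4.0, True, 0.1),
         Task("box_part_alone", 9.0, False, 0.0)]
print(min_cost_selector(tasks, human_available=True).name)   # cooperative branch
print(min_cost_selector(tasks, human_available=False).name)  # autonomous fallback
```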
Citations: 12
Footstep and Timing Adaptation for Humanoid Robots Utilizing Pre-computation of Capture Regions
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555675
Y. Tazaki
This study proposes a real-time footstep and timing adaptation mechanism for humanoid robots that can be integrated into a conventional walking pattern generator and increase the robustness of walking against disturbances. In order to meet the strict real-time constraint of humanoid robot control, the proposed method computes viable capture basins in the design phase. This pre-computed data can be used at runtime to modify the foot placement, the timing of landing, and the center-of-mass movement in response to applied disturbances with small computation cost. The performance of the proposed method is evaluated in simulation experiments.
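The offline/online split can be illustrated with the textbook linear-inverted-pendulum capture point, xi = c + cdot/omega with omega = sqrt(g/z); this is a standard quantity, not the paper's exact viable-capture-basin computation, and the grid, step limit, and lookup below are illustrative assumptions.

```python
import numpy as np

g, z_com = 9.81, 0.9
omega = np.sqrt(g / z_com)

def capture_point(c, cdot):
    # Instantaneous capture point of the linear inverted pendulum model.
    return c + cdot / omega

# Offline phase (toy stand-in for pre-computed capture regions): tabulate,
# over a grid of CoM states, the step location that captures the motion and
# whether it is kinematically reachable.
c_grid = np.linspace(-0.2, 0.2, 41)
cd_grid = np.linspace(-1.0, 1.0, 81)
step_table = np.empty((c_grid.size, cd_grid.size))
feasible = np.empty_like(step_table, dtype=bool)
max_step = 0.4  # hypothetical step-length limit [m]
for i, c in enumerate(c_grid):
    for j, cd in enumerate(cd_grid):
        xi = capture_point(c, cd)
        step_table[i, j] = xi
        feasible[i, j] = abs(xi) <= max_step

# Runtime phase: after a disturbance, only a cheap table lookup is needed.
def adapt_footstep(c, cd):
    i = np.abs(c_grid - c).argmin()
    j = np.abs(cd_grid - cd).argmin()
    return step_table[i, j], feasible[i, j]

print(adapt_footstep(0.05, 0.6))  # approx. (0.23, True) for these parameters
```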
Citations: 1
Vision for Prosthesis Control Using Unsupervised Labeling of Training Data
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555789
Vijeth Rai, David Boe, E. Rombokas
Transitioning from one activity to another is one of the key challenges of prosthetic control. Unlike body sensors (EMG, mechanical), vision sensors provide a glimpse of the environment and of desired future movements. This could be employed to anticipate and trigger transitions in a prosthesis, providing a smooth user experience. A significant bottleneck in using vision sensors has been the acquisition of large labeled training datasets. Labeling the terrain in thousands of images is labor-intensive; it would be ideal to simply collect visual data for long periods without needing to label each frame. Toward that goal, we apply an unsupervised learning method to generate mode labels for kinematic gait cycles in the training data. We use these labels with images from the same training data to train a vision classifier. The classifier predicts the target mode an average of 2.2 seconds before the kinematic changes. We report 96.6% overall and 99.5% steady-state mode classification accuracy. These results are comparable to studies using manually labeled data. This method, however, has the potential to scale dramatically without requiring additional labeling.
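A minimal sketch of the two-stage idea — cluster kinematic features into pseudo-labels, then train a vision classifier on them — using scikit-learn with synthetic stand-in features (the paper's actual features and models are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Stand-ins for real data: per-gait-cycle kinematic features and matching
# visual features from camera frames; two latent locomotion modes are simulated.
modes = rng.integers(0, 2, size=600)
kinematics = rng.normal(loc=modes[:, None] * 3.0, size=(600, 6))
vision_feats = rng.normal(loc=modes[:, None] * 2.0, size=(600, 32))

# Stage 1: unsupervised mode labels from kinematics alone (no manual labeling).
pseudo_labels = KMeans(n_clusters=2, n_init=10,
                       random_state=0).fit_predict(kinematics)

# Stage 2: train the vision classifier against the pseudo-labels.
Xtr, Xte, ytr, yte = train_test_split(vision_feats, pseudo_labels, random_state=0)
clf = SVC().fit(Xtr, ytr)
print("held-out agreement with pseudo-labels:", clf.score(Xte, yte))
```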
Citations: 3
Detection of Collaboration and Collision Events during Contact Task Execution
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555677
Felix Franzel, Thomas Eiband, Dongheui Lee
This work introduces a contact event pipeline to distinguish task contact from human-robot interaction and collision during task execution. The increasing need for close-proximity physical human-robot interaction (pHRI) in the private, health, and industrial sectors demands new safety solutions. One of the most important issues regarding safe collaboration is the robust recognition and classification of contacts between human and robot. A solution is designed that enables simple task teaching and accurate contact monitoring during task execution. Besides an external force/torque sensor, only proprioceptive data is used for the contact evaluation. An approach based on demonstrated task knowledge and the offset resulting from human interaction is designed so that a contact event detector can distinguish contact events from normal execution. A contact-type classifier implemented as a Support Vector Machine is trained on the identified events. The system is set up to quickly identify contact incidents and enable appropriate robot reactions. An offline evaluation is conducted with data recorded from intended and unintended contacts as well as examples of task contacts such as object manipulation and environmental interactions. The system's performance and its high responsiveness are evaluated in different experiments, including a real-world task.
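A toy sketch of the two pipeline stages — a residual-threshold contact event detector followed by an SVM contact-type classifier — with simulated wrench signals; the threshold, features, and labels are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Expected task wrench from the demonstration vs. the measured wrench.
t = np.arange(0, 10, 0.01)
expected = np.column_stack([np.sin(t), np.zeros_like(t), 5 + 0 * t])
measured = expected + rng.normal(scale=0.05, size=expected.shape)
measured[400:430] += np.array([8.0, 0.0, 0.0])  # simulated collision spike
measured[700:800] += np.array([0.0, 2.0, 0.0])  # simulated gentle interaction

# Event detector: flag samples whose residual exceeds a hypothetical threshold.
residual = np.linalg.norm(measured - expected, axis=1)
events = residual > 1.0

def event_features(mask, res):
    # Per-event features: peak and duration of each contiguous excursion.
    idx = np.flatnonzero(mask)
    runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
    return np.array([[res[r].max(), len(r)] for r in runs if len(r)])

X = event_features(events, residual)
y = np.array([1, 0])  # toy labels: 1 = collision (sharp), 0 = interaction (sustained)
clf = SVC().fit(X, y)
print(clf.predict(X))
```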
Citations: 3
Gait Percent Estimation during Walking and Running using Sagittal Shank or Thigh Angles
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555673
M. Eslamy, A. Schilling
In this work we analyzed the relationship between the shank and thigh angles (separately) and the gait cycle progression, in order to develop a novel approach for gait percent estimation. To do so, the angles were integrated. Our findings show that the integral of the shank or thigh angle behaves monotonically and can therefore approximate the gait percent during a gait cycle through a one-to-one relationship. For all individuals, speeds, and gaits, a quasi-linear relationship was found between the shank and thigh angle integrals and the gait percents. Average $\mathrm{R}^{2}$ values close to one and average RMS errors less than 2.2 were achieved. The proposed approach was investigated for different subjects (21 subjects), speeds (10 speeds), and gaits (walking and running), and can potentially be used for human motion analysis as well as for motion planning of assistive devices.
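A minimal sketch of the approach on a synthetic shank-angle profile: integrate the angle over the cycle, normalize, and invert the monotonic map by interpolation. The signal shape, offset, and sampling are illustrative assumptions.

```python
import numpy as np

# Toy shank-angle profile over one gait cycle; a real signal would come from
# an IMU. The positive offset keeps the integral strictly increasing, as the
# paper reports for measured shank/thigh angles.
phase = np.linspace(0.0, 1.0, 101)            # ground-truth gait percent / 100
shank = 25 * np.sin(2 * np.pi * phase) + 30   # [deg], strictly positive here

# Cumulative trapezoidal integral of the angle over the cycle.
integral = np.concatenate([[0.0],
    np.cumsum(0.5 * (shank[1:] + shank[:-1]) * np.diff(phase))])

# Normalize to [0, 1]; monotonicity gives a one-to-one map to gait percent.
key = (integral - integral[0]) / (integral[-1] - integral[0])

def gait_percent(integral_norm):
    # Invert the monotonic map at runtime by interpolation.
    return 100.0 * np.interp(integral_norm, key, phase)

print(gait_percent(key[25]))  # 25.0: a quarter of the way through the cycle
```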
Citations: 0
SURENA IV: Towards A Cost-effective Full-size Humanoid Robot for Real-world Scenarios
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555686
A. Yousefi-Koma, B. Maleki, Hessam Maleki, A. Amani, M. Bazrafshani, Hossein Keshavarz, Ala Iranmanesh, A. Yazdanpanah, H. Alai, Sahel Salehi, Mahyar Ashkvari, Milad Mousavi, M. Shafiee-Ashtiani
This paper describes the hardware, software framework, and experimental testing of the SURENA IV humanoid robotics platform. SURENA IV has 43 degrees of freedom (DoFs), including seven DoFs for each arm, six DoFs for each hand, and six DoFs for each leg, with a height of 170 cm, a mass of 68 kg, and morphological and mass properties similar to those of an average adult human. SURENA IV aims to realize a cost-effective and anthropomorphic humanoid robot for real-world scenarios. To this end, we demonstrate a locomotion framework based on a novel and inexpensive predictive foot sensor that enables walking despite a 7 cm foot-position error caused by the accumulated deflection of links and connections (the robot was manufactured with tools available at universities). Thanks to this sensor, the robot can walk over unknown obstacles without any force feedback, by online adaptation of foot height and orientation. Moreover, the arm and hand of the robot have been designed to grasp objects of different stiffnesses and geometries, enabling the robot to perform drilling, visual servoing of a moving object, and writing its name on a whiteboard.
Citations: 5
An Experimental Validation and Comparison of Reaching Motion Models for Unconstrained Handovers: Towards Generating Humanlike Motions for Human-Robot Handovers
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555779
Wesley P. Chan, T. Tran, Sara Sheikholeslami, E. Croft
The Minimum Jerk motion model has long been cited in the literature for human point-to-point reaching motions in single-person tasks. While it has been demonstrated that applying minimum-jerk-like trajectories to robot reaching motions in the joint action task of human-robot handovers allows a robot giver to be perceived as more careful, safe, and skilled, it has not been verified whether human reaching motions in handovers follow the Minimum Jerk model. To experimentally test and verify motion models for human reaches in handovers, we examined human reaching motions in unconstrained handovers (where the person is allowed to move their whole body) and fitted them against 1) the Minimum Jerk model, 2) its variation, the Decoupled Minimum Jerk model, and 3) the recently proposed Elliptical (Conic) model. Results showed that the Conic model fits unconstrained human handover reaching motions best. Furthermore, we discovered that unlike constrained, single-person reaching motions, which have been found to be elliptical, there is a split between elliptical and hyperbolic conic types. We expect our results will help guide the generation of more humanlike reaching motions for human-robot handover tasks.
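For reference, the Minimum Jerk model cited here has the classic closed form x(t) = x0 + (xf - x0)(10 s^3 - 15 s^4 + 6 s^5) with s = t/T; a small sketch follows (the Decoupled Minimum Jerk and Conic variants are not reproduced):

```python
import numpy as np

def minimum_jerk(x0, xf, T, t):
    # Classic minimum-jerk point-to-point profile (zero velocity and
    # acceleration at both endpoints).
    s = np.clip(t / T, 0.0, 1.0)
    return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

t = np.linspace(0.0, 1.0, 6)
print(minimum_jerk(0.0, 0.3, 1.0, t))  # reach 0.3 m in 1 s
```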
Citations: 4
The KIT Bimanual Manipulation Dataset
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555788
F. Krebs, Andre Meixner, Isabel Patzer, T. Asfour
Learning models of bimanual manipulation tasks from human demonstration requires capturing human body and hand motions, as well as the objects involved in the demonstration, to provide all the information needed for learning manipulation task models on the symbolic and subsymbolic levels. We provide a new multi-modal dataset of bimanual manipulation actions consisting of accurate human whole-body motion data, the full configuration of both hands, and the 6D poses and trajectories of all objects involved in the task. The data is collected using five different sensor systems: a motion capture system, two data gloves, three RGB-D cameras, a head-mounted egocentric camera, and three inertial measurement units (IMUs). The dataset includes 12 actions of bimanual daily household activities performed by two healthy subjects, with a large number of intra-action variations and three repetitions of each action variation, resulting in 588 recorded demonstrations. A total of 21 household items are used to perform the various actions. In addition to the data collection, we developed tools and methods for the standardized representation and organization of multi-modal sensor data in large-scale human motion databases. We extended our Master Motor Map (MMM) framework to allow the mapping of collected demonstrations to a reference model of the human body, as well as the segmentation and annotation of recorded manipulation tasks. The dataset includes raw sensor data, normalized data in the MMM format, and annotations, and is made publicly available in the KIT Whole-Body Human Motion Database.
Citations: 13
Spatial calibration of whole-body artificial skin on a humanoid robot: comparing self-contact, 3D reconstruction, and CAD-based calibration
Pub Date : 2021-07-19 DOI: 10.1109/HUMANOIDS47582.2021.9555806
Lukas Rustler, Bohumila Potočná, Michal Polic, K. Štěpánová, M. Hoffmann
Robots largely lacked the sense of touch for decades. As artificial sensitive skins covering large areas of robot bodies start to appear, the positions of the sensors on the robot body are needed for the skins to be useful to the machines. In this work, a Nao humanoid robot was retrofitted with pressure-sensitive skin on the head, torso, and arms. We experimentally compare the accuracy and effort associated with the following skin spatial calibration approaches and their combinations: (i) combining CAD models and the skin layout in 2D, (ii) 3D reconstruction from images, and (iii) using robot kinematics to calibrate the skin by self-contact. To acquire the 3D positions of taxels on individual skin parts, methods (i) and (ii) were similarly laborious, but 3D reconstruction was more accurate. To align these 3D point clouds with the robot kinematics, two variants of self-contact were employed: skin-on-skin and the use of a custom end effector (finger). In combination with the 3D reconstruction data, mean calibration errors below the radius of individual sensors were achieved (2 mm). Significant perturbations of more than 100 torso taxel positions could be corrected using self-contact calibration, reaching approximately 3 mm mean error. This work is not a proof of concept but a deployment of the approaches at scale: the outcome is the actual spatial calibration of all 970 taxels on the robot body. As the different calibration approaches are evaluated in isolation as well as in different combinations, this work provides a guideline applicable to the spatial calibration of different sensor arrays.
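At its core, aligning a reconstructed taxel point cloud with the kinematic frame is a rigid point-cloud registration; the sketch below uses the standard Kabsch/SVD solution on synthetic correspondences (the paper's actual self-contact pipeline and data are not reproduced):

```python
import numpy as np

def kabsch(P, Q):
    # Rigid transform (R, t) minimizing ||R @ p_i + t - q_i|| over all pairs.
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

rng = np.random.default_rng(3)
taxels_recon = rng.uniform(size=(970, 3))    # 3D-reconstructed taxel cloud
# Hypothetical ground-truth pose of the skin in the kinematic frame.
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
taxels_kin = taxels_recon @ R_true.T + np.array([0.05, -0.02, 0.10])

R, t = kabsch(taxels_recon, taxels_kin)
err = np.linalg.norm(taxels_recon @ R.T + t - taxels_kin, axis=1)
print("mean alignment error [m]:", err.mean())  # ~0 for this noiseless toy
```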
Citations: 6