
Latest publications: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Modelling and Compensation for Transmission Error of Timing Belt in Legged Robots
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354989
Jingcheng Jiang, Yifang Zhang, N. Tsagarakis
The timing belt transmission offers numerous advantages for legged robots, including high efficiency, impact absorption, and a large range of joint motion. However, the transmission error under high load remains a challenge for locomotion control and for further applications of belt transmission. Traditional linear models cannot effectively capture belt deformation under a wide range of tension variations due to the nonlinearity. In this paper, we propose a model for compensating the belt transmission error based on the pretension and torque of the pulley. The adopted approach bypasses the complexity of elaborate physical model derivations, yielding a nonlinear model of the transmission error through straightforward fitting. Based on the proposed model, an error compensation control is investigated and tested on a one-DoF leg prototype of a legged robot. The alignment between experimental results and theoretical analysis demonstrates the accuracy of the modeling and the effectiveness of the error compensation control method. The proposed model provides a convenient and straightforward solution for effectively compensating belt transmission errors in legged robots.
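The fitting idea described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual model: the quadratic basis in pretension and torque, the coefficient values, and the synthetic data are all assumptions.

```python
import numpy as np

# Hypothetical sketch: obtain a nonlinear transmission-error model by
# straightforward fitting over pulley pretension F0 and load torque tau,
# instead of deriving a physical belt model. Basis, coefficients, and
# synthetic data are illustrative assumptions, not from the paper.

def design_matrix(F0, tau):
    # quadratic polynomial basis in (F0, tau)
    return np.column_stack([np.ones_like(F0), F0, tau, F0 * tau, F0**2, tau**2])

rng = np.random.default_rng(0)
F0 = rng.uniform(50.0, 200.0, 100)     # pretension samples [N]
tau = rng.uniform(0.0, 30.0, 100)      # load torque samples [N*m]
true_c = np.array([0.01, -1e-5, 4e-3, -2e-6, 3e-8, 5e-5])   # "ground truth"
err = design_matrix(F0, tau) @ true_c + rng.normal(0.0, 1e-4, 100)  # noisy error [rad]

# least-squares fit of the error surface
c, *_ = np.linalg.lstsq(design_matrix(F0, tau), err, rcond=None)

def compensate(theta_cmd, F0, tau):
    # feed-forward compensation: subtract the predicted belt deformation
    pred = design_matrix(np.atleast_1d(F0), np.atleast_1d(tau)) @ c
    return theta_cmd - pred[0]
```

The fitted surface can then be evaluated online from the measured pretension and commanded torque, which matches the abstract's point that no elaborate physical derivation is needed.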
Cited by: 0
Simplified Modeling of Hybrid Soft Robots with Constant Stiffness Assumption
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10355009
Umer Huzaifa, Dimuthu D. K. Arachchige, Muhammad Aneeq uz Zaman, Usman Syed
Soft robots have shown their value as alternatives or supplements to rigid robots in applications such as search-and-rescue missions and complex precision tasks. Their ability to take on various shapes and apply adaptable force gives them an advantage over stiff robots. However, their soft structure sometimes cannot deliver enough force for the task. Hybrid soft robots (HSRs) combine a soft body with a stronger backbone to handle tasks needing more strength. This rigid part lets us use rigid-body dynamics to estimate HSR behavior. Here, we introduce a simplified N-link rigid-body dynamic model with constant stiffness to mimic HSR behavior. While the stiffness of soft robots varies, the backbone in HSRs makes them behave as if they had constant stiffness. Comparative experiments support the effectiveness of our N-link model for HSR modeling.
Cited by: 0
An improved ORB-GMS image feature extraction and matching algorithm*
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10355043
Zhiying Tan, Wenbo Fan, Weifeng Kong, Xu Tao, Linsen Xu, Xiaobin Xu
Feature point extraction and matching is a key technology in object detection and simultaneous localization and mapping (SLAM). To address the redundancy of feature points extracted by the traditional ORB algorithm, the low matching accuracy of mainstream robust estimation algorithms, and their poor real-time performance, an improved ORB-GMS image feature extraction and matching algorithm is proposed. First, the algorithm uses the gray values of the image to compute an adaptive extraction threshold for the feature points. An image pyramid is then constructed according to the image size, and the total number of feature points to be extracted is distributed across the pyramid layers in proportion to their areas. Feature points are extracted and counted layer by layer, and extraction ends once each layer reaches its allotted number. A quadtree algorithm is then used to homogenize the feature points. Finally, the grid scoring model is optimized from an 8-neighborhood to a 4-neighborhood, which reduces computing time. Experimental results show that the matching accuracy of the proposed algorithm is 14% higher than that of the original algorithm, and the running time is 12% lower.
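The quadtree homogenization step mentioned in the abstract can be illustrated with a minimal sketch: recursively split the image region and keep only the strongest keypoint per leaf cell. Keypoints are simplified here to `(x, y, response)` tuples, and the depth limit is an illustrative choice, not the paper's parameter.

```python
# Sketch of quadtree keypoint homogenization: split the region into four
# quadrants until a depth limit or a single keypoint remains, then keep
# the keypoint with the highest corner response in each leaf cell.

def homogenize(kps, bounds, max_depth=4, depth=0):
    if not kps:
        return []
    if depth == max_depth or len(kps) == 1:
        # leaf cell: retain only the strongest keypoint
        return [max(kps, key=lambda k: k[2])]
    x0, y0, x1, y1 = bounds
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    quads = [[], [], [], []]
    for k in kps:
        quads[(k[0] >= xm) + 2 * (k[1] >= ym)].append(k)
    sub = [(x0, y0, xm, ym), (xm, y0, x1, ym),
           (x0, ym, xm, y1), (xm, ym, x1, y1)]
    out = []
    for q, b in zip(quads, sub):
        out += homogenize(q, b, max_depth, depth + 1)
    return out
```

A dense cluster of keypoints in one corner thus collapses to a few representatives, evening out the spatial distribution before matching.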
Cited by: 0
Path Planning for Robotic Arm Based on Reinforcement Learning under the Train
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354783
Guanhao Xie, Duo Zhao, Qichao Tang, Muhua Zhang, Wenjie Zhao, Yewen Wang
Due to the widespread use of robotic arms, path planning for them has long been a hot research topic. However, traditional path planning algorithms struggle to keep the disparity between planned paths low, making them unsuitable for operating scenarios with high safety requirements, such as the undercarriage environment of trains. A Reinforcement Learning (RL) framework is proposed in this article to address this challenge. The Proximal Policy Optimization (PPO) algorithm is enhanced into a variant referred to as Randomized PPO (RPPO), which demonstrates slightly accelerated convergence. Additionally, a reward model is proposed to help the agent escape local optima. To model the application environment, lidar is employed to obtain obstacle point cloud information, which is then transformed into an octree grid map for maneuvering the robotic arm around obstacles. According to the experimental results, the paths planned by our system are superior to those of RRT* in terms of both average length and standard deviation, and RPPO exhibits better convergence speed and path standard deviation than PPO.
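The obstacle-map construction step can be sketched as a flat voxel occupancy grid, a simplified stand-in for the octree grid map the paper builds from lidar point clouds. The function names and voxel size are hypothetical, for illustration only.

```python
import numpy as np

# Simplified stand-in for an octree grid map: a set of occupied voxel
# coordinates derived from a lidar point cloud. Names and parameters
# are illustrative, not from the paper.

def voxelize(points, voxel_size):
    # map each 3D point to its integer voxel coordinate
    idx = np.floor(np.asarray(points, float) / voxel_size).astype(int)
    return {tuple(i) for i in idx}

def is_occupied(grid, p, voxel_size):
    # collision query used when steering the arm around obstacles
    return tuple(np.floor(np.asarray(p, float) / voxel_size).astype(int)) in grid
```

An octree refines this idea by storing occupancy hierarchically, so large free regions cost a single node instead of many voxels.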
Cited by: 0
Estimation of Deformation for Self-balancing Lower Limb Exoskeleton Only Using Force/Torque Sensors
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354999
Ziqiang Chen, Ming Yang, Feng Li, Wentao Li, Jinke Li, Dingkui Tian, Jianquan Sun, Yong He, Xinyu Wu
This paper presents a general deformation estimation method for the self-balancing lower limb exoskeleton (SBLLE). In particular, we propose a Bi-LSTM deformation estimator (BLDE) to predict and compensate for the deformation of the SBLLE based on the current force and torque data measured by force/torque (F/T) sensors. First, we choose four movements, including squatting down and up, waist twisting, left foot lifting, and right foot lifting, to mimic the constituent actions of walking. The deformation data set is obtained through a motion capture analysis system and offline planned trajectories, and the corresponding F/T data set is obtained from the F/T sensors embedded in the feet of the SBLLE. Second, the Bi-LSTM network is trained to learn the relationship between deformation and F/T data and is verified on the test set. The BLDE is then added to the control system of the SBLLE to estimate and compensate for the deformation. Finally, the four movements and a walking experiment are conducted on the exoskeleton AutoLEE-G2 with the BLDE. The experimental results prove that the BLDE can predict and compensate for deformation using F/T sensors alone.
Cited by: 0
Automatic Control System for Reach-to-Grasp Movement of a 7-DOF Robotic Arm Using Object Pose Estimation with an RGB Camera
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354531
Shuting Bai, Jiazhen Guo, Yinlai Jiang, Hiroshi Yokoi, Shunta Togo
In this study, we develop an automatic control system to perform the reach-to-grasp movement of a 7-DOF (degrees of freedom) robotic arm that has the same DOFs as a human arm and an end-effector with the same shape as a human hand. The 6-DOF pose of the object to be grasped is estimated in real time from RGB images alone, using a neural-network-based object pose estimation model. Based on this information, motion planning is performed to automatically control the reach-to-grasp movement of the robotic arm. In the evaluation experiment, the 7-DOF robotic arm performs reach-to-grasp movements for a household object in different poses using the developed control system. The results show that the developed control system can automatically control the reach-to-grasp movement toward an object in an arbitrary pose.
Cited by: 0
Decoupled Control of Bipedal Locomotion Based on HZD and H-LIP
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354624
Yinong Ye, Yongming Yue, Wei Gao, Shiwu Zhang
The walking control of bipedal robots poses challenges due to inherent coupling among the robot's degrees of freedom. This paper introduces an approach that addresses this challenge through decoupled control in the sagittal and frontal planes. The proposed control method takes advantage of Hybrid Zero Dynamics (HZD) and the Hybrid-Linear Inverted Pendulum (H-LIP) for the sagittal- and frontal-plane dynamics, respectively. The hybrid controller is successfully validated on the bipedal robot RobBIE, whose torso inertia is relatively high and, if not adequately controlled, can easily violate the point-mass assumption underlying many previously developed reduced-order-model-based walking controllers. With the help of full-model-based Hybrid Zero Dynamics, the robot achieves stable walking at different velocities and adapts to various terrains and even moderate disturbances.
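A minimal sketch of the kind of reduced-order model that H-LIP control builds on: the closed-form linear inverted pendulum (LIP) flow between steps plus capture-point foot placement. This is not the paper's controller; the pendulum height, step period, and initial state are illustrative values.

```python
import math

# Reduced-order sketch: LIP dynamics x'' = lam^2 * x with closed-form flow,
# stepped by placing each new foot at the instantaneous capture point.
# Parameters are illustrative, not the paper's.

g, z0 = 9.81, 0.8                  # gravity [m/s^2], constant pendulum height [m]
lam = math.sqrt(g / z0)            # LIP time constant

def lip_flow(x, v, T):
    # closed-form LIP solution after time T
    c, s = math.cosh(lam * T), math.sinh(lam * T)
    return c * x + s * v / lam, lam * s * x + c * v

def capture_point_step(x, v):
    # place the new foot at the capture point u = x + v/lam;
    # return the state expressed relative to the new foot
    u = x + v / lam
    return x - u, v

x, v = 0.0, 0.5                    # start over the stance foot at 0.5 m/s
for _ in range(3):                 # three steps of period T = 0.35 s
    x, v = lip_flow(*capture_point_step(x, v), T=0.35)
```

With this placement rule the velocity contracts by a factor of exp(-lam*T) each step, which is why capture-point stepping is the textbook stabilizer for LIP-type models.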
Cited by: 0
A Quick Means for the Burnt Skin Area Calculation via Multiple-view Structured Light Sensors
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354748
Di Wu, Yuping Ye, Feifei Gu, Zhan Song
With the fast development of computer vision and artificial intelligence, many technologies from these fields have been introduced into the medical domain. Accurate estimation of the burnt skin area is crucial for treatment plan selection and prognostic decision-making. However, state-of-the-art estimation of the burnt skin area exhibits inadequate accuracy and acquisition efficiency. In this paper, a burnt skin acquisition system based on infrared structured-light 3D imaging is developed. To accurately segment the burnt skin point cloud from the raw point cloud acquired by the proposed system, we employ the Segment Anything Model (SAM). Subsequently, the point clouds segmented from different views are registered using pre-calibrated parameters. Moreover, a surface reconstruction algorithm is employed to generate triangular meshes. Finally, we calculate the area of all triangular mesh facets to represent the burnt skin area. Several experiments were conducted to demonstrate the accuracy of the proposed method.
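The final step described in the abstract reduces to summing facet areas, each 0.5 * |(B - A) x (C - A)| for a triangle with vertices A, B, C. The sketch below shows this computation; the unit-square mesh is an illustrative example, not data from the paper.

```python
import numpy as np

# Total surface area of a triangle mesh as the sum of per-facet areas,
# each half the norm of the cross product of two edge vectors.

def mesh_area(vertices, faces):
    v = np.asarray(vertices, float)
    f = np.asarray(faces, int)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1).sum()

# illustrative mesh: a unit square split into two triangles, total area 1.0
verts = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]]
faces = [[0, 1, 2], [0, 2, 3]]
```

Because the cross-product formula handles arbitrarily oriented triangles, the same sum works directly on the reconstructed 3D skin mesh without projecting it to a plane.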
Cited by: 0
3D Semantic Segmentation for Grape Bunch Point Cloud Based on Feature Enhancement
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354793
Jiangtao Luo, Dongbo Zhang, Tao Yi
As a representative bunch-type fruit, the collision-free and undamaged harvesting of grapes is of great significance. To obtain accurate 3D spatial semantic information, this paper proposes a multi-feature enhanced semantic segmentation model based on Mask R-CNN and PointNet++. First, a depth camera is used to obtain RGB-D images. The RGB images are fed into the Mask R-CNN network for fast detection of grape bunches. The color and depth information are then fused and transformed into point cloud data, followed by estimation of the normal vectors. Finally, the nine-dimensional point cloud, which includes spatial position, color information, and normal vectors, is input to the improved PointNet++ network to achieve semantic segmentation of grape bunches, peduncles, and leaves, yielding spatial semantic information about the area surrounding the bunches. The experimental results show that incorporating normal vector and color features raises the overall point cloud segmentation accuracy to 93.7%, with a mean accuracy of 81.8%, improvements of 12.1% and 13.5% over using positional features alone. The results demonstrate that the proposed model can provide precise 3D semantic information to the robot while ensuring both speed and accuracy. This lays the groundwork for subsequent collision-free and damage-free picking.
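The normal-vector estimation and nine-dimensional feature assembly can be sketched via local PCA: the eigenvector of the smallest eigenvalue of the neighborhood covariance approximates the surface normal. The brute-force neighbor search and the value of `k` are simplifying assumptions for illustration.

```python
import numpy as np

# Hedged sketch of the preprocessing: per-point normals by local PCA,
# then stacking of position, color, and normal into a 9-D point feature.
# Brute-force k-NN and k=8 are simplifying assumptions.

def estimate_normals(xyz, k=8):
    xyz = np.asarray(xyz, float)
    d = np.linalg.norm(xyz[:, None, :] - xyz[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors (incl. self)
    normals = np.empty_like(xyz)
    for i, idx in enumerate(nn):
        p = xyz[idx] - xyz[idx].mean(axis=0)
        _, vecs = np.linalg.eigh(p.T @ p)      # eigenvalues in ascending order
        normals[i] = vecs[:, 0]                # smallest-eigenvalue eigenvector
    return normals

def nine_dim(xyz, rgb):
    # 9-D point feature: (x, y, z, r, g, b, nx, ny, nz)
    return np.hstack([np.asarray(xyz, float), np.asarray(rgb, float),
                      estimate_normals(xyz)])
```

For planar patches this recovers the plane normal up to sign; production pipelines would use a spatial index (e.g. a k-d tree) instead of the O(N^2) distance matrix.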
Citations: 0
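The nine-dimensional input this abstract describes (xyz position, RGB color, and a per-point normal vector) can be assembled before the segmentation network ever runs. The sketch below, in plain NumPy, uses PCA over the k nearest neighbors for normal estimation; the paper does not specify its estimation method, and the function names and brute-force neighbor search are illustrative assumptions suitable only for small clouds:

```python
import numpy as np

def estimate_normals(xyz: np.ndarray, k: int = 8) -> np.ndarray:
    """PCA normal estimation: the eigenvector of the local covariance
    with the smallest eigenvalue approximates the surface normal."""
    # brute-force pairwise squared distances; use a KD-tree for large clouds
    d2 = ((xyz[:, None, :] - xyz[None, :, :]) ** 2).sum(-1)
    normals = np.empty_like(xyz)
    for i in range(xyz.shape[0]):
        nbrs = xyz[np.argsort(d2[i])[:k]]  # k nearest points (incl. the point itself)
        eigvals, eigvecs = np.linalg.eigh(np.cov(nbrs.T))
        normals[i] = eigvecs[:, 0]         # eigenvector of the smallest eigenvalue
    return normals

def build_9d_cloud(xyz: np.ndarray, rgb: np.ndarray, k: int = 8) -> np.ndarray:
    """Stack position, color, and estimated normals into the (N, 9) layout
    the paper feeds to its modified PointNet++."""
    return np.hstack([xyz, rgb, estimate_normals(xyz, k)])

# sanity check: points on the z = 0 plane should get normals of +/-(0, 0, 1)
rng = np.random.default_rng(0)
xyz = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50), np.zeros(50)])
rgb = rng.uniform(0, 1, (50, 3))
cloud = build_9d_cloud(xyz, rgb)
print(cloud.shape)  # (50, 9)
```

For planar test points the smallest-eigenvalue eigenvector recovers the plane normal exactly, which is a quick sanity check before feeding real RGBD-derived clouds.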
Speech-image based Multimodal AI Interaction for Scrub Nurse Assistance in the Operating Room 基于语音图像的多模态人工智能交互,为手术室内的洗刷护士提供帮助
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354726
W. Ng, Han Yi Wang, Zheng Li
With the increasing surgical need in our aging society, there is a lack of experienced surgical assistants, such as scrub nurses. To facilitate the training of junior scrub nurses and to reduce human errors, e.g., missing surgical items, we develop a speech-image based multimodal AI framework to assist scrub nurses in the operating room. The proposed framework allows real-time instrument type identification and instance detection, which enables junior scrub nurses to become more familiar with the surgical instruments and guides them throughout the surgical procedure. We construct an ex-vivo video-assisted thoracoscopic surgery dataset and benchmark it on common object detection models, reaching an average precision of 98.5% and an average recall of 98.9% with the state-of-the-art YOLO-v7. Additionally, we implement an oriented bounding box version of YOLO-v7 to address the undesired bounding box suppression when instruments cross over. By achieving an average precision of 95.6% and an average recall of 97.4%, we improve the average recall by up to 9.2% compared to the previous oriented bounding box version of YOLO-v5. To minimize distraction during surgery, we adopt a deep learning-based automatic speech recognition model to allow surgeons to concentrate on the procedure. Our physical demonstration substantiates the feasibility of the proposed framework in providing real-time guidance and assistance for scrub nurses.
With the growing surgical demand of our aging society, experienced surgical assistants such as scrub nurses are in short supply. To facilitate the training of junior scrub nurses and to reduce human errors such as missing surgical items, we develop a speech-image based multimodal AI framework to assist scrub nurses in the operating room. The proposed framework provides real-time instrument type identification and instance detection, helping junior scrub nurses become more familiar with surgical instruments and guiding them throughout the surgical procedure. We construct an ex-vivo video-assisted thoracoscopic surgery dataset and benchmark it on common object detection models, reaching an average precision of 98.5% and an average recall of 98.9% with the state-of-the-art YOLO-v7. In addition, we implement an oriented bounding box version of YOLO-v7 to address the undesired bounding-box suppression when instruments cross over. Achieving an average precision of 95.6% and an average recall of 97.4%, it improves the average recall by up to 9.2% over the previous oriented bounding box version of YOLO-v5. To minimize distraction during surgery, we adopt a deep-learning-based automatic speech recognition model so that surgeons can concentrate on the procedure. Our physical demonstration substantiates the feasibility of the proposed framework in providing real-time guidance and assistance for scrub nurses.
{"title":"Speech-image based Multimodal AI Interaction for Scrub Nurse Assistance in the Operating Room","authors":"W. Ng, Han Yi Wang, Zheng Li","doi":"10.1109/ROBIO58561.2023.10354726","DOIUrl":"https://doi.org/10.1109/ROBIO58561.2023.10354726","url":null,"abstract":"With the increasing surgical need in our aging society, there is a lack of experienced surgical assistants, such as scrub nurses. To facilitate the training of junior scrub nurses and to reduce human errors, e.g., missing surgical items, we develop a speech-image based multimodal AI framework to assist scrub nurses in the operating room. The proposed framework allows real-time instrument type identification and instance detection, which enables junior scrub nurses to become more familiar with the surgical instruments and guides them throughout the surgical procedure. We construct an ex-vivo video-assisted thoracoscopic surgery dataset and benchmark it on common object detection models, reaching an average precision of 98.5% and an average recall of 98.9% with the state-of-the-art YOLO-v7. Additionally, we implement an oriented bounding box version of YOLO-v7 to address the undesired bounding box suppression when instruments cross over. By achieving an average precision of 95.6% and an average recall of 97.4%, we improve the average recall by up to 9.2% compared to the previous oriented bounding box version of YOLO-v5. To minimize distraction during surgery, we adopt a deep learning-based automatic speech recognition model to allow surgeons to concentrate on the procedure. 
Our physical demonstration substantiates the feasibility of the proposed framework in providing real-time guidance and assistance for scrub nurses.","PeriodicalId":505134,"journal":{"name":"2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"73 2","pages":"1-6"},"PeriodicalIF":0.0,"publicationDate":"2023-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139187027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
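The case for the oriented-box variant in this abstract can be seen with a small numeric experiment: two elongated instruments crossing at ±45° share almost the same axis-aligned box (IoU ≈ 1, so standard NMS would suppress one of the two detections), while their oriented boxes barely overlap. The sketch below is a self-contained illustration using Sutherland-Hodgman polygon clipping, not the paper's implementation; all function names are assumptions:

```python
import math

def rect_corners(cx, cy, w, h, angle_deg):
    """Corners of an oriented box (cx, cy, w, h, angle), counter-clockwise."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(cx + dx * c - dy * s, cy + dx * s + dy * c)
            for dx, dy in [(-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2)]]

def _clip(poly, a, b):
    """Sutherland-Hodgman step: keep the part of poly left of edge a->b."""
    def inside(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0
    def cross_point(p, q):
        den = (a[0] - b[0]) * (p[1] - q[1]) - (a[1] - b[1]) * (p[0] - q[0])
        t = ((a[0] - p[0]) * (p[1] - q[1]) - (a[1] - p[1]) * (p[0] - q[0])) / den
        return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
    out = []
    for i in range(len(poly)):
        p, q = poly[i], poly[(i + 1) % len(poly)]
        if inside(q):
            if not inside(p):
                out.append(cross_point(p, q))
            out.append(q)
        elif inside(p):
            out.append(cross_point(p, q))
    return out

def poly_area(poly):
    """Shoelace formula."""
    return abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                   - poly[(i + 1) % len(poly)][0] * poly[i][1]
                   for i in range(len(poly)))) / 2

def obb_iou(r1, r2):
    """IoU of two oriented boxes via polygon clipping."""
    inter = rect_corners(*r2)
    c1 = rect_corners(*r1)
    for i in range(4):
        inter = _clip(inter, c1[i], c1[(i + 1) % 4])
        if not inter:
            return 0.0
    ai = poly_area(inter)
    return ai / (r1[2] * r1[3] + r2[2] * r2[3] - ai)

def aabb_iou(r1, r2):
    """IoU of the axis-aligned bounding boxes of the same two oriented boxes."""
    boxes = []
    for r in (r1, r2):
        xs, ys = zip(*rect_corners(*r))
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    (ax1, ay1, ax2, ay2), (bx1, by1, bx2, by2) = boxes
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union

# two 10x1 "instruments" crossing at +/-45 degrees
r1, r2 = (0, 0, 10, 1, 45), (0, 0, 10, 1, -45)
print(aabb_iou(r1, r2))  # ~1.0  -> axis-aligned NMS suppresses one detection
print(obb_iou(r1, r2))   # ~0.05 -> oriented NMS keeps both
```

With these toy boxes the axis-aligned IoU is essentially 1.0 while the oriented IoU is 1/19 ≈ 0.05, which is exactly the suppression failure the oriented YOLO-v7 head avoids.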
Journal
2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)