
Latest publications from the 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)

3D Semantic Segmentation for Grape Bunch Point Cloud Based on Feature Enhancement
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354793
Jiangtao Luo, Dongbo Zhang, Tao Yi
As a representative bunch-type fruit, the collision-free and undamaged harvesting of grapes is of great significance. To obtain accurate 3D spatial semantic information, this paper proposes a multi-feature enhanced semantic segmentation model based on Mask R-CNN and PointNet++. First, a depth camera is used to obtain RGBD images. The RGB images are then fed into the Mask R-CNN network for fast detection of grape bunches. The color and depth information are fused and transformed into point cloud data, followed by estimation of normal vectors. Finally, the nine-dimensional point cloud, which includes spatial location, color information, and normal vectors, is input into the improved PointNet++ network to achieve semantic segmentation of grape bunches, peduncles, and leaves. This process extracts spatial semantic information from the area surrounding the bunches. The experimental results show that by incorporating normal-vector and color features, the overall accuracy of point cloud segmentation increases to 93.7%, with a mean accuracy of 81.8%, improvements of 12.1% and 13.5%, respectively, over using only positional features. The results demonstrate that the proposed method can effectively provide precise 3D semantic information to the robot while ensuring both speed and accuracy. This lays the groundwork for subsequent collision-free and damage-free picking.
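The nine-dimensional per-point feature described in the abstract (xyz position, rgb color, estimated surface normal) can be sketched as follows. This is our illustration, not the paper's code: normals are estimated by PCA plane fitting over brute-force nearest neighbours, and all function names are our own.

```python
import numpy as np

def estimate_normal(neighbors):
    # Normal = eigenvector of the smallest eigenvalue of the local
    # covariance matrix (i.e., a plane fit by PCA).
    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]  # eigh returns eigenvalues in ascending order

def nine_dim_cloud(xyz, rgb, k=8):
    # Concatenate position, color, and estimated normals into (N, 9).
    normals = np.empty_like(xyz)
    for i, p in enumerate(xyz):
        # k nearest neighbours by brute force (fine for a sketch)
        idx = np.argsort(((xyz - p) ** 2).sum(axis=1))[:k]
        normals[i] = estimate_normal(xyz[idx])
    return np.hstack([xyz, rgb, normals])
```

For points sampled from a horizontal plane, the estimated normals come out as ±(0, 0, 1), as expected for a plane fit.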
Citations: 0
Speech-image based Multimodal AI Interaction for Scrub Nurse Assistance in the Operating Room
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354726
W. Ng, Han Yi Wang, Zheng Li
With the increasing surgical demand in our aging society, there is a shortage of experienced surgical assistants, such as scrub nurses. To facilitate the training of junior scrub nurses and to reduce human errors, e.g., missing surgical items, we develop a speech-image based multimodal AI framework to assist scrub nurses in the operating room. The proposed framework allows real-time instrument type identification and instance detection, which enables junior scrub nurses to become more familiar with the surgical instruments and guides them throughout the surgical procedure. We construct an ex-vivo video-assisted thoracoscopic surgery dataset and benchmark it on common object detection models, reaching an average precision of 98.5% and an average recall of 98.9% with the state-of-the-art YOLO-v7. Additionally, we implement an oriented bounding box version of YOLO-v7 to address the undesired bounding-box suppression when instruments cross over. Achieving an average precision of 95.6% and an average recall of 97.4%, it improves the average recall by up to 9.2% over the previous oriented bounding box version of YOLO-v5. To minimize distraction during surgery, we adopt a deep learning-based automatic speech recognition model to allow surgeons to concentrate on the procedure. Our physical demonstration substantiates the feasibility of the proposed framework in providing real-time guidance and assistance for scrub nurses.
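The precision and recall figures quoted above are standard detection metrics. A minimal sketch of how such numbers can be computed at an IoU threshold, using greedy matching of predictions to ground truth (our helper names, not the paper's evaluation code):

```python
def iou(a, b):
    # a, b: axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thr=0.5):
    # Greedily match each prediction to its best unmatched ground truth.
    matched = set()
    tp = 0
    for p in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            o = iou(p, g)
            if o > best:
                best, best_j = o, j
        if best >= thr:
            tp += 1
            matched.add(best_j)
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp / (tp + fp), tp / (tp + fn)
```

A spurious prediction lowers precision but not recall, which is why the two figures are reported separately.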
Citations: 0
Fast Visual Servo for Rapidly Seafood Capturing of Underwater Delta Robots
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354662
Maosheng Yang, Lin Xiao, Ce Chen, Yangyi Hu, Yi Sun, Huayan Pu, Wenchuan Jia
In this paper, we propose and design an underwater delta robot for fast seafood grasping. First, the hardware structure of the robot is described in detail. Then, a visual servo control method for fast catching with this underwater delta robot is proposed. The method generates radial trajectories in real time, realizing the catch despite the swaying of the robot body and the movement of the target object. In actual grasping tests, the moving platform and the slave arm can occlude the target, causing the loss of target position information. We therefore propose a position prediction method that predicts the position of the grasped object when occlusion occurs, improving the grasping success rate and ensuring a smooth robot trajectory. Finally, several land and underwater experiments were conducted with good results, verifying the feasibility of the robot structure and algorithm.
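One simple way to bridge an occlusion, shown here purely as our illustration (the paper does not specify its predictor), is to extrapolate the target's last observed velocity:

```python
class OcclusionPredictor:
    """Constant-velocity position predictor for a temporarily occluded target."""

    def __init__(self):
        self.last_pos = None
        self.velocity = (0.0, 0.0, 0.0)

    def update(self, pos, dt):
        # Called while the target is visible: track its velocity.
        if self.last_pos is not None and dt > 0:
            self.velocity = tuple(
                (p - q) / dt for p, q in zip(pos, self.last_pos))
        self.last_pos = pos

    def predict(self, dt):
        # Called while the target is occluded: extrapolate linearly.
        return tuple(p + v * dt
                     for p, v in zip(self.last_pos, self.velocity))
```

A Kalman filter would additionally smooth noisy observations, at the cost of tuning process and measurement noise.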
Citations: 0
Modelling and Compensation for Transmission Error of Timing Belt in Legged Robots
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354989
Jingcheng Jiang, Yifang Zhang, N. Tsagarakis
The timing belt transmission offers numerous advantages for legged robots, including high efficiency, impact absorption, and a large range of joint motion. However, the transmission error under high load remains a challenge for locomotion control and for further applications of belt transmission. Traditional linear models cannot effectively capture the belt deformation under a wide range of tension variations due to its nonlinearity. In this paper, we propose a model that compensates for the belt transmission error based on the pretension and torque of the pulley. The adopted approach bypasses the complexity of elaborate physical model derivations, yielding a nonlinear model of the transmission error through straightforward fitting. Based on the proposed model, an error compensation control is investigated and tested with a one-DoF leg prototype of a legged robot. The agreement between experimental results and theoretical analysis demonstrates the accuracy of the modeling and the effectiveness of the error compensation control method. The proposed model provides a convenient and straightforward solution to effectively compensate for the belt transmission errors in legged robots.
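The "straightforward fitting" step can be sketched as a least-squares fit of the error against a small polynomial basis in pretension and torque, then used as a feed-forward correction. The basis choice and function names below are our assumptions, not the paper's model:

```python
import numpy as np

def basis(pretension, torque, deg=2):
    # Polynomial basis with pretension-torque cross terms.
    cols = [np.ones_like(torque), pretension]
    for d in range(1, deg + 1):
        cols.append(torque ** d)
        cols.append(pretension * torque ** d)
    return np.stack(cols, axis=1)

def fit_error_model(pretension, torque, error, deg=2):
    # Least-squares fit of measured transmission error.
    coef, *_ = np.linalg.lstsq(basis(pretension, torque, deg),
                               error, rcond=None)
    return coef

def compensate(cmd_angle, pretension, torque, coef, deg=2):
    # Subtract the predicted belt deflection from the commanded angle.
    pred = basis(np.atleast_1d(float(pretension)),
                 np.atleast_1d(float(torque)), deg) @ coef
    return cmd_angle - pred[0]
```

On synthetic data whose error lies in the basis span, the fit recovers the underlying coefficients and the compensation removes the error exactly.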
Citations: 0
Visual Servoing Using Cosine Similarity Metric
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354973
Wenbo Ning, Yecan Yin, Xiangfei Li, Huan Zhao, Yunfeng Fu, Han Ding
This article presents a new visual servoing method based on a cosine similarity metric, which uses the cosine distance defined by cosine similarity as the optimization objective of histogram-based direct visual servoing (HDVS) to design the servoing control law. As a more compact global descriptor, the histogram makes direct visual servoing more robust against noise than directly using image intensity. Cosine similarity is the cosine of the angle between two vectors and has been widely employed to calculate the similarity between multidimensional data. The cosine distance derived from the cosine similarity is more sensitive to the directional difference between the histograms, giving the proposed method a faster convergence rate than the existing Matusita distance-based servoing method. This advantage is verified by simulations, and experiments are conducted on a manipulator to further verify the effectiveness of the proposed method in practical situations.
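The objective in question is simply d(h1, h2) = 1 - (h1 · h2) / (||h1|| ||h2||), which depends only on the angle between the two histogram vectors, not their magnitudes. A minimal sketch (function name is ours):

```python
from math import sqrt

def cosine_distance(h1, h2):
    # 1 minus the cosine of the angle between histogram vectors h1, h2.
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = sqrt(sum(a * a for a in h1))
    n2 = sqrt(sum(b * b for b in h2))
    return 1.0 - dot / (n1 * n2)
```

Parallel histograms give distance 0 even if scaled differently, while orthogonal ones give 1, which is the directional sensitivity the abstract refers to.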
Citations: 0
The Enhanced Network Swin-T by CNN on Flow Pattern Recognition for Two-phase Image Dataset with Low Similarity
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354651
Jinsong. Zhang, Deling. Wang, Huadan. Hao, Liangwen. Yan
In two-phase flow experiments with different material conditions and process parameters, the collected image dataset, with its low similarity and small size, made it difficult for common deep learning algorithms to achieve high-precision recognition of flow patterns, owing to their limited capability for extracting global features. In this article, we proposed a new deep learning algorithm that enhances the Swin-T network with a CNN, combining the advantages of the Swin-T network with Dynamic Region-Aware Convolution. The new algorithm retained the window multi-head self-attention mechanism and added a self-attention adjustment module to enhance the extraction of image features and the convergence speed of the network. It significantly improved the recognition accuracy of the different flow patterns in both sharp and blurred images. The enhanced network Swin-T by CNN is highly applicable to the classification of image datasets with low similarity and small size.
Citations: 0
Inertia Estimation of Quadruped Robot under Load and Its Walking Control Strategy in Urban Complex Terrain
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354861
Qiang Fu, Muxuan Han, Yunjiang Lou, Ke Li, Zhiyuan Yu
When the quadruped robot is engaged in logistics transportation tasks, it encounters a challenge where the distribution of the center of mass (CoM) of the loaded items is not only random but also subject to time variations. Consequently, the robot becomes susceptible to non-zero resultant torques, which inevitably impact its body posture during the walking process. This paper proposes a method to estimate the CoM inertia using four one-dimensional force sensors and a walking control strategy for complex urban terrain. The inertia tensor and CoM of the load are first estimated, then the robot’s dynamics are compensated, and foothold adjustments are made for underactuated orientations to compensate for the extra moment generated by the CoM offset. For uneven terrain, the terrain estimator and event-based gait are used to adjust the robot’s gait to reduce the impact of terrain changes on the robot. The effectiveness of the proposed method and the feasibility of load walking in urban terrain are verified through comparative experiments, complex terrain load walking experiments in Webots, and real prototype experiments.
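The planar CoM offset of the load can be recovered from four one-dimensional force sensors at known mounting points as a force-weighted average of sensor positions. This is our illustration of that geometric idea, not the paper's estimator (which also recovers the inertia tensor):

```python
def estimate_com(forces, positions):
    # forces: four vertical force readings [N]
    # positions: the matching (x, y) sensor mounting points [m]
    total = sum(forces)
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return x, y, total  # planar CoM offset and total load weight
```

A symmetric load reads equal forces and a zero offset; shifting weight toward one sensor moves the estimated CoM toward that corner.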
Citations: 0
Fatigue Performance and Impact Toughness of PBF-LB Manufactured Inconel 718
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354797
T. Rautio, M. Jaskari, Haider Ali Bhatti, Aappo Mustakangas, M. Keskitalo, A. Järvenpää
This study investigates the fatigue performance and impact toughness of laser powder bed fusion (PBF-LB) manufactured Inconel 718. Inconel 718 is a nickel-based superalloy known for its high-temperature properties. The PBF-LB process offers accuracy and the ability to produce parts in their final geometry, eliminating the need for expensive machining. These features make it a tempting material for robotic applications as well, such as structural components or environments with elevated temperatures or corrosive conditions. However, the influence of heat treatment on the mechanical and dynamic properties of Inconel 718 is not yet fully understood. The study aims to characterize Inconel 718 specimens through tensile, impact, and fatigue testing, as well as microstructural analysis using Field-Emission Scanning Electron Microscopy (FESEM) with Electron Backscatter Diffraction (EBSD). The results will provide insights into the mechanical behavior of PBF-LB-manufactured Inconel 718, considering printing orientation, mechanical properties, and surface quality. The findings will contribute to the understanding of this material's dynamic properties, crucial for the design and utilization of components produced through PBF-LB.
Citations: 0
Reducing the Computational Cost of Transformers for Person Re-identification
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354731
Wen Wang, Zheyuan Lin, Shanshan Ji, Te Li, J. Gu, Minhong Wan, Chunlong Zhang
Transformer-based visual technologies have witnessed remarkable progress in recent years, and person re-identification (ReID) is one of the active research areas that adopt transformers to improve performance. However, a major challenge of applying transformers to ReID is the high computational cost, which hinders the real-time deployment of such methods. To address this issue, this paper proposes two simple yet effective techniques to reduce the computation of transformers for ReID. The first technique is to eliminate invalid patches that do not contain any person information, thereby reducing the number of tokens fed into the transformer. Considering that computational complexity is quadratic in the number of input tokens, the second technique partitions the image into multiple windows, applies separate transformers to each window, and merges the class tokens from each window, which reduces the complexity of the self-attention mechanism. By combining these two techniques, our proposed method reduces the FLOPs of the SOTA baseline model by 12.2%, while slightly improving the rank-1 accuracy and sacrificing only 1.1% mAP on the DukeMTMC-ReID dataset.
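The quadratic-complexity argument behind the windowing technique can be made concrete with a small cost model. This is our arithmetic, with illustrative token and window counts, not the paper's FLOPs accounting:

```python
def attention_cost(n_tokens, dim):
    # Dominant self-attention terms: QK^T scores plus
    # score-weighted values, each n^2 * dim multiply-adds.
    return 2 * n_tokens * n_tokens * dim

def windowed_cost(n_tokens, dim, n_windows):
    # Split n tokens into w windows of n/w tokens each,
    # run attention independently per window.
    per_window = n_tokens // n_windows
    return n_windows * attention_cost(per_window, dim)
```

Because the cost is quadratic in token count, w windows of n/w tokens cost w * (n/w)^2 = n^2 / w for the attention term, a factor-of-w saving before the (linear-cost) class-token merge.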
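The quadratic token cost that the windowing technique targets can be illustrated with a back-of-the-envelope FLOP count (a minimal sketch: the function names, token counts, and dimensions below are illustrative, not the paper's model or its actual measured 12.2% reduction):

```python
def attention_flops(num_tokens: int, dim: int) -> int:
    # QK^T and (attn @ V) each cost roughly num_tokens^2 * dim multiply-adds,
    # so self-attention is quadratic in the number of input tokens.
    return 2 * num_tokens ** 2 * dim

def windowed_attention_flops(num_tokens: int, dim: int, num_windows: int) -> int:
    # Partition the tokens into equal windows and attend within each window only;
    # total cost becomes num_windows * (num_tokens / num_windows)^2 * dim * 2.
    per_window = num_tokens // num_windows
    return num_windows * attention_flops(per_window, dim)

full = attention_flops(196, 64)                 # e.g. a 14x14 patch grid
windowed = windowed_attention_flops(196, 64, 4)  # the same grid split into 4 windows
print(full / windowed)  # -> 4.0: attention cost drops linearly in the window count
```

This also shows why the first technique (dropping person-free patches) compounds with windowing: removing tokens shrinks the quadratic term before it is partitioned.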
Cited by: 0
A Magnetic Force Calculation of Permanent Magnet for Magnetic Surgical Instruments
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354860
Jingwu Li, Zhijun Sun, Zhongqing Sun, Xing Gao, C. Cao, Yingtian Li
When magnetic surgical instruments are used to perform surgical operations, two situations must be strictly avoided to ensure safety: 1) the magnetic surgical instrument falling down into the abdominal cavity; 2) the pushing forces between the inner wall of the abdominal cavity and the magnetic surgical instrument becoming high enough to harm the human body. However, when calculating the magnetic force applied to magnetic surgical instruments, the variation of the magnetic field within the space occupied by the internal permanent magnets (IPMs) placed inside the surgical instrument is normally omitted. In this paper, to calculate the magnetic field generated by the external permanent magnets (EPMs), a multi-dipole model is proposed that accounts for the variation of the magnetic field in the region where the IPMs are located, and a numerical integration method for calculating the magnetic force is introduced. The experimental results showed that the multi-dipole model could predict the magnetic flux density at distances of 20 to 50 mm from the permanent magnet, and that the magnetic force calculation model predicts the magnetic force variation trend well.
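The point-dipole superposition underlying such a multi-dipole model can be sketched in a few lines of numpy (an illustrative sketch only: the dipole layout, moments, and the finite-difference evaluation of F = ∇(m·B) are assumptions for demonstration, not the paper's discretization or numerical integration scheme):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def dipole_field(m, r):
    # Flux density of a point dipole with moment m (A*m^2) at displacement r (m):
    # B = mu0/(4*pi) * (3*(m . r_hat)*r_hat - m) / |r|^3
    m, r = np.asarray(m, float), np.asarray(r, float)
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi) * (3 * np.dot(m, r_hat) * r_hat - m) / d ** 3

def multi_dipole_field(dipoles, point):
    # Superpose several dipoles approximating the EPM, so the field's variation
    # across the region where the IPMs sit is captured rather than averaged out.
    point = np.asarray(point, float)
    return sum(dipole_field(m, point - np.asarray(p, float)) for p, m in dipoles)

def force_on_moment(dipoles, m_ipm, point, h=1e-6):
    # F = grad(m . B), evaluated by central finite differences of the field
    # along each axis; summing this over IPM sub-volumes approximates the
    # volume integral of the force.
    point = np.asarray(point, float)
    F = np.zeros(3)
    for k in range(3):
        dp = np.zeros(3)
        dp[k] = h
        F[k] = (np.dot(m_ipm, multi_dipole_field(dipoles, point + dp))
                - np.dot(m_ipm, multi_dipole_field(dipoles, point - dp))) / (2 * h)
    return F

# Example: a single EPM dipole at the origin, IPM 20 mm above it on the z-axis.
epm = [(np.zeros(3), np.array([0.0, 0.0, 1.0]))]  # (position, moment) pairs
ipm_moment = np.array([0.0, 0.0, 1.0])
print(multi_dipole_field(epm, [0.0, 0.0, 0.02]))        # on-axis B ~ [0, 0, 0.025] T
print(force_on_moment(epm, ipm_moment, [0.0, 0.0, 0.02]))  # ~ [0, 0, -3.75] N (attraction)
```

For coaxial dipoles the on-axis results match the closed forms B_z = mu0*m/(2*pi*z^3) and |F_z| = 3*mu0*m1*m2/(2*pi*z^4), a useful sanity check before trusting the superposition at off-axis points.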
Cited by: 0
Journal: 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)