Latest publications from the 2023 IEEE International Conference on Robotics and Biomimetics (ROBIO)
DenseXFormer: An Effective Occluded Human Instance Segmentation Network based on Density Map for Nursing Robot
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354873
Sihao Qi, Jiexin Xie, Haitao Yan, Shijie Guo
Human instance segmentation under occlusion remains a challenging task, especially in nursing scenarios, which hinders the development of nursing robots. Existing approaches cannot focus the network’s attention on the occluded areas, which leads to unsatisfactory results. To address this issue, this paper proposes a novel and effective density-map-based network for the instance segmentation task. Density-map-based neural networks perform well when human bodies occlude each other and can be trained without additional annotation information. First, a density map generator (DMG) is introduced to produce accurate density information from the feature maps computed by the backbone. Second, the density map is used to enhance features in the density fusion module (DFM), which focuses the network on high-density areas as well as occluded areas. Additionally, to remedy the lack of occlusion-oriented datasets for nursing instance segmentation, a new dataset, the NSR dataset, is introduced. Extensive experiments on the public datasets (NSR and COCO-PersonOcc) show that the proposed method is a powerful instrument for human instance segmentation, with prominent improvements in both efficiency and accuracy. The dataset is available at https://github.com/Monkey0806/NSR-dataset.
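As a rough illustration of the density-guided enhancement idea described in the abstract, the sketch below scales backbone features by a predicted density map; the single 1x1-convolution density head, the channel count, and the simple multiplicative fusion are assumptions for illustration, not the paper's actual DMG/DFM design.

```python
import torch
import torch.nn as nn

class DensityFusion(nn.Module):
    """Illustrative density-guided feature enhancement (not the paper's exact DFM)."""
    def __init__(self, channels: int):
        super().__init__()
        # hypothetical density head: 1x1 conv producing a single-channel density map
        self.density_head = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        density = torch.sigmoid(self.density_head(feats))  # (B, 1, H, W) values in [0, 1]
        enhanced = feats * (1.0 + density)                  # emphasize high-density (occluded) regions
        return enhanced, density

if __name__ == "__main__":
    x = torch.randn(2, 256, 64, 64)          # backbone feature map
    enhanced, density = DensityFusion(256)(x)
    print(enhanced.shape, density.shape)      # torch.Size([2, 256, 64, 64]) torch.Size([2, 1, 64, 64])
```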
Citations: 0
Real-Time RGB-D Pedestrian Tracking for Mobile Robot
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354856
Wenhao Liu, Wanlei Li, Tao Wang, Jun He, Yunjiang Lou
Pedestrian tracking is an important research direction in the field of mobile robotics. To complete tasks more efficiently without interfering with pedestrians’ original intentions, mobile robots need to track pedestrians accurately and in real time. In this paper, we propose a real-time RGB-D pedestrian tracking framework. First, we propose a pedestrian segmentation detection algorithm to detect pedestrians and obtain their two-dimensional positions. Second, given limited computational resources and the rarity of missed pedestrian detections, we use a nearest neighbor tracker for pedestrian tracking. To address inaccurate pedestrian localization, we use our detection algorithm to obtain pedestrian centers from RGB images and combine them with point clouds to obtain the pedestrians’ 2D coordinates. Our method enables accurate pedestrian tracking in the world coordinate frame by adaptively fusing RGB images with their corresponding depth-based point clouds. Besides, our lightweight detection and tracking algorithms guarantee real-time pedestrian tracking for realistic mobile robot applications. To validate the effectiveness and real-time performance of the tracking algorithm, we conduct experiments using multiple pedestrian datasets of approximately half a minute in length, captured from two different perspectives. To validate the practicality and accuracy of the tracking algorithm in real-world scenarios, we extend it to trajectory prediction.
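A minimal sketch of the nearest-neighbor association step mentioned above, assuming tracks and detections are 2-D positions in the world frame; the gating threshold and the greedy matching order are illustrative choices, not the paper's exact tracker.

```python
import numpy as np

def nearest_neighbor_update(tracks: dict[int, np.ndarray],
                            detections: list[np.ndarray],
                            gate: float = 0.8) -> dict[int, np.ndarray]:
    """Assign each detected 2-D pedestrian position to the closest existing track.

    tracks: track_id -> last known (x, y) in the world frame
    detections: list of (x, y) positions from the RGB-D pipeline
    gate: maximum association distance in metres (illustrative value)
    """
    next_id = max(tracks, default=-1) + 1
    for det in detections:
        if tracks:
            tid, dist = min(((tid, np.linalg.norm(det - pos)) for tid, pos in tracks.items()),
                            key=lambda item: item[1])
            if dist < gate:
                tracks[tid] = det        # update the matched track
                continue
        tracks[next_id] = det            # no match within the gate: start a new track
        next_id += 1
    return tracks

tracks = {0: np.array([1.0, 2.0])}
tracks = nearest_neighbor_update(tracks, [np.array([1.1, 2.05]), np.array([4.0, 0.5])])
print(tracks)   # {0: array([1.1 , 2.05]), 1: array([4. , 0.5])}
```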
Citations: 0
Feature Fusion Module Based on Gate Mechanism for Object Detection
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354575
Zepeng Sun, Dongyin Jin, Jian Deng, Mengyang Zhang, Zhenzhou Shao
In recent years, deep learning based feature fusion has drawn significant attention in the field of information integration due to its robust representational and generative capabilities. However, existing methods struggle to effectively preserve essential information. To this end, this paper proposes a gate-based fusion module for object detection that integrates information from distinct feature layers of convolutional neural networks. The gate structure of the fusion module adaptively selects features from neighboring layers, storing valuable information in memory units and passing it to the subsequent layer. This approach facilitates the fusion of high-level semantic and low-level detailed features. Experimental validation is conducted on the public Pascal VOC dataset. The results demonstrate that adding the gate-based fusion module to the detection task yields an average accuracy increase of up to 5%.
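The following sketch illustrates one plausible form of a gate-based fusion between a low-level and a high-level feature layer; the 1x1-convolution gate and the convex blend are assumptions for illustration, not the paper's exact module.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Illustrative gate that blends a low-level and a high-level feature map
    (a sketch of the idea, not the paper's exact module)."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # upsample the high-level map so both inputs share a spatial size
        high = nn.functional.interpolate(high, size=low.shape[-2:], mode="nearest")
        g = self.gate(torch.cat([low, high], dim=1))   # per-pixel, per-channel gate in (0, 1)
        return g * low + (1.0 - g) * high              # adaptively select between layers

low = torch.randn(1, 128, 64, 64)    # detailed, low-level features
high = torch.randn(1, 128, 32, 32)   # semantic, high-level features
print(GatedFusion(128)(low, high).shape)   # torch.Size([1, 128, 64, 64])
```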
Citations: 0
Fog-based Distributed Camera Network system for Surveillance Applications
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10355008
Mvs Sakethram, Ps Saikrishna
The Internet of Things (IoT) refers to a network of interconnected physical devices embedded with sensors, software, and network connectivity that enables them to collect and exchange data. Cloud computing refers to the delivery of computing resources and services over the Internet. The time it takes for IoT data to travel to the cloud and back can have a substantial influence on performance, especially for applications that need low latency. Fog computing has been proposed to address this constraint. Many issues still need to be resolved in order to fully utilize the real-time analytics capabilities of the fog and IoT paradigms. In this paper, we work extensively with the iFogSim simulator to model IoT and fog environments with real-world challenges, focusing mainly on data transmission between fog nodes. We describe a case study and add constraints that create a realistic fog environment with a Distributed Camera Network System (DCNS).
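To make the latency argument concrete, the back-of-the-envelope calculation below compares a cloud round trip with a fog-node round trip for a single camera frame; all bandwidths, delays, and processing times are illustrative assumptions, not iFogSim outputs.

```python
# Back-of-the-envelope latency comparison for a camera frame processed in the
# cloud versus on a nearby fog node (all numbers are illustrative assumptions).

def frame_latency_ms(payload_kb: float, bandwidth_mbps: float,
                     propagation_ms: float, processing_ms: float) -> float:
    # upload the frame once, add a two-way propagation delay and server processing
    transmission_ms = payload_kb * 8.0 / (bandwidth_mbps * 1000.0) * 1000.0
    return transmission_ms + 2.0 * propagation_ms + processing_ms

frame_kb = 200.0   # one compressed surveillance frame
cloud = frame_latency_ms(frame_kb, bandwidth_mbps=50.0, propagation_ms=40.0, processing_ms=15.0)
fog   = frame_latency_ms(frame_kb, bandwidth_mbps=100.0, propagation_ms=2.0, processing_ms=25.0)
print(f"cloud: {cloud:.1f} ms, fog: {fog:.1f} ms")   # cloud: 127.0 ms, fog: 45.0 ms
```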
Citations: 0
Shape Analysis and Control of a Continuum Objects*
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354616
Yuqiao Dai, Peng Li, Shilin Zhang, Yunhui Liu
Soft robots are a hot spot in today's robotics research. Most of them exist in the form of continuums, yet it remains difficult to recognize a continuum's shape and reproduce a desired shape. In this paper, we propose a method in which the shape features of a flexible continuum are obtained by contour centerline extraction and binocular camera reconstruction, and the relationship between the motor input and the continuum's shape output is modeled using neural networks. A simulation environment is set up to test shape estimation and shape control of the flexible continuum. Results show that this method can predict and reproduce the shape of the continuum well, and it can be used for shape control of continuum robots.
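A minimal sketch of the motor-input-to-shape mapping described above: a small MLP regressing sampled centerline points from actuator commands. The number of motors, sample points, and layer sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

N_MOTORS, N_POINTS = 3, 20   # assumed actuator count and centerline sampling density

model = nn.Sequential(
    nn.Linear(N_MOTORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 3 * N_POINTS),          # (x, y, z) for each centerline sample
)

motor_cmd = torch.tensor([[0.2, -0.1, 0.4]])          # one actuation command
centerline = model(motor_cmd).view(-1, N_POINTS, 3)   # predicted 3-D centerline
print(centerline.shape)                                # torch.Size([1, 20, 3])
```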
Citations: 0
Research on Horizontal Following Control of a Suspended Robot for Self-Momentum Targets
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354971
Dan Xiong, Yiyong Huang, Yanjie Yang, Hongwei Liu, Zhijie Jiang, Wei Han
Micro/low gravity is one of the most prominent features of the outer space environment, and it significantly alters the force state and dynamics of spacecraft or astronauts compared to the Earth’s gravitational environment. Simulating the micro/low gravity environment on the ground is crucial for astronaut training and spacecraft testing. The suspension method utilizes a pulley and sling mechanism to create a micro/low gravity environment: the rope tension counteracts the gravitational force acting on the object. The simulation quality greatly depends on the accuracy of the horizontal following system, which is the central subsystem of the suspension device. In this paper, we propose a dual-arm following system to solve the issue of horizontal following for self-momentum targets. In addition, we study adaptive suppression of flexible rope swing and coupling control between a robotic arm and a crane. Physical experiments are conducted on the robotic system to verify the effectiveness of the proposed approach.
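As a worked example of the rope-tension principle behind suspension-based gravity offloading, the snippet below computes the constant vertical tension needed so a suspended body effectively feels lunar gravity; the mass and gravity values are illustrative assumptions.

```python
# Sketch of the rope-tension idea behind suspension-based gravity offloading:
# the sling must carry the fraction of the weight not provided by the target
# gravity level (mass and gravity values below are illustrative).

EARTH_G = 9.81   # m/s^2
LUNAR_G = 1.62   # m/s^2

def required_tension(mass_kg: float, target_g: float) -> float:
    """Vertical rope tension so the suspended body feels only target_g."""
    return mass_kg * (EARTH_G - target_g)

astronaut_kg = 90.0
print(f"{required_tension(astronaut_kg, LUNAR_G):.1f} N")   # 737.1 N of constant tension
```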
Citations: 0
Visual Servoing Using Cosine Similarity Metric
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354973
Wenbo Ning, Yecan Yin, Xiangfei Li, Huan Zhao, Yunfeng Fu, Han Ding
This article presents a new visual servoing method based on a cosine similarity metric, which uses the cosine distance defined by cosine similarity as the optimization objective of histogram-based direct visual servoing (HDVS) to design the servoing control law. As a more compact global descriptor, the histogram makes direct visual servoing more robust to noise than directly using image intensity. Cosine similarity is the cosine of the angle between two vectors and has been widely employed to measure the similarity of multidimensional data. The cosine distance derived from cosine similarity is more sensitive to directional differences between histograms, giving the proposed method a faster convergence rate than the existing Matusita distance-based servoing method. This advantage is verified by simulations, and experiments on a manipulator further verify the effectiveness of the proposed method in practical situations.
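A short sketch of the cosine-distance cost that such a histogram-based servoing law would minimize, assuming simple intensity histograms of the current and desired images; the 8-bin toy histograms are placeholders for the full image histograms used in practice.

```python
import numpy as np

def cosine_distance(h1: np.ndarray, h2: np.ndarray) -> float:
    """Cosine distance 1 - cos(theta) between two image histograms."""
    return 1.0 - float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Intensity histograms of the current and desired images (toy 8-bin example).
current = np.array([120, 80, 40, 20, 10, 5, 3, 2], dtype=float)
desired = np.array([100, 90, 50, 15, 12, 6, 4, 3], dtype=float)
print(f"cost = {cosine_distance(current, desired):.4f}")   # the control law drives this toward 0
```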
Citations: 0
The Enhanced Network Swin-T by CNN on Flow Pattern Recognition for Two-phase Image Dataset with Low Similarity
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354651
Jinsong Zhang, Deling Wang, Huadan Hao, Liangwen Yan
In two-phase flow experiments with different material conditions and process parameters, the collected image dataset has low similarity and a small number of samples, which makes it difficult for common deep learning algorithms to achieve high-precision recognition of flow patterns due to their limited ability to extract global features. In this article, we propose a new deep learning algorithm that enhances the Swin-T network with a CNN, combining the advantages of the Swin-T network with Dynamic Region-Aware Convolution. The new algorithm retains the window multi-head self-attention mechanism and adds a self-attention adjustment module to improve image feature extraction and the convergence speed of the network. It significantly improves the recognition accuracy of different flow patterns in both sharp and blurred images. The CNN-enhanced Swin-T network is highly applicable to the classification of image datasets with low similarity and few samples.
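The sketch below shows one generic way to pair a self-attention branch with a parallel convolutional branch; it uses full (not windowed) attention and a depthwise convolution as stand-ins, so it illustrates the spirit of combining Swin-T with a CNN rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConvAugmentedBlock(nn.Module):
    """Illustrative block pairing self-attention with a parallel convolutional
    branch (a stand-in for the paper's Swin-T + Dynamic Region-Aware Convolution)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)  # local detail branch
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)              # (B, H*W, C) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)    # global self-attention over all tokens
        attn_out = self.norm(attn_out).transpose(1, 2).reshape(b, c, h, w)
        return attn_out + self.conv(x)                     # fuse global and local features

x = torch.randn(1, 96, 14, 14)
print(ConvAugmentedBlock(96)(x).shape)    # torch.Size([1, 96, 14, 14])
```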
Citations: 0
Inertia Estimation of Quadruped Robot under Load and Its Walking Control Strategy in Urban Complex Terrain
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354861
Qiang Fu, Muxuan Han, Yunjiang Lou, Ke Li, Zhiyuan Yu
When a quadruped robot is engaged in logistics transportation tasks, it encounters a challenge: the distribution of the center of mass (CoM) of the loaded items is not only random but also time-varying. Consequently, the robot becomes subject to non-zero resultant torques, which inevitably affect its body posture during walking. This paper proposes a method to estimate the load's CoM and inertia using four one-dimensional force sensors, together with a walking control strategy for complex urban terrain. The inertia tensor and CoM of the load are first estimated, then the robot's dynamics are compensated, and footholds are adjusted for underactuated orientations to compensate for the extra moment generated by the CoM offset. For uneven terrain, a terrain estimator and an event-based gait are used to adjust the robot's gait and reduce the impact of terrain changes. The effectiveness of the proposed method and the feasibility of loaded walking in urban terrain are verified through comparative experiments, complex-terrain load-walking experiments in Webots, and real prototype experiments.
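A minimal sketch of estimating the load's mass and CoM offset from four one-dimensional (vertical) force readings by taking a force-weighted average of the sensor positions; the sensor layout and force values are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

# Assumed sensor positions (x, y) in the body frame, metres, and vertical forces in newtons.
sensor_xy = np.array([[ 0.25,  0.15],    # front-left
                      [ 0.25, -0.15],    # front-right
                      [-0.25,  0.15],    # rear-left
                      [-0.25, -0.15]])   # rear-right
forces = np.array([60.0, 55.0, 40.0, 45.0])

total = forces.sum()
com_xy = (forces[:, None] * sensor_xy).sum(axis=0) / total   # force-weighted average of positions
print(f"load mass ~ {total / 9.81:.1f} kg, CoM offset (x, y) = {com_xy}")
```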
Citations: 0
Uncertainty in Bayesian Reinforcement Learning for Robot Manipulation Tasks with Sparse Rewards
Pub Date : 2023-12-04 DOI: 10.1109/ROBIO58561.2023.10354785
Li Zheng, Yanghong Li, Yahao Wang, Guangrui Bai, Haiyang He, Erbao Dong
This paper aims to explore the application of Bayesian deep reinforcement learning (BDRL) in robot manipulation tasks with sparse rewards, focusing on addressing the uncertainty in complex and sparsely rewarded environments. Conventional deep reinforcement learning (DRL) algorithms still face significant challenges in the context of robot manipulation tasks. To address this issue, this paper proposes a general algorithm framework called BDRL that combines reinforcement learning algorithms with Bayesian networks to quantify the model uncertainty, aleatoric uncertainty in neural networks, and uncertainty in the reward function. The effectiveness and generality of the proposed algorithm are validated through simulation experiments on multiple sets of different sparsely rewarded tasks, employing various advanced DRL algorithms. The research results demonstrate that the DRL algorithm based on the Bayesian network mechanism significantly improves the convergence speed of the algorithms in sparse reward tasks by accurately estimating the model uncertainty.
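As a rough illustration of quantifying model (epistemic) uncertainty, the sketch below uses a small ensemble of Q-networks and takes the per-action standard deviation across members as the uncertainty estimate; this is a common approximation of the Bayesian treatment, not the paper's specific BDRL framework, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS, N_MEMBERS = 8, 4, 5   # assumed problem and ensemble sizes

ensemble = nn.ModuleList([
    nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
    for _ in range(N_MEMBERS)
])

state = torch.randn(1, STATE_DIM)
q_values = torch.stack([member(state) for member in ensemble])   # (N_MEMBERS, 1, N_ACTIONS)
mean_q, epistemic_std = q_values.mean(dim=0), q_values.std(dim=0)
print(mean_q.shape, epistemic_std.shape)   # per-action value estimate and its uncertainty
```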
Citations: 0