
Latest publications: 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)

Additively Manufactured Primitive Plastic Phantom for Calibration of Low-Resolution Computed Tomography Cone Beam Scanner for Additive Creation of 3D Copies using Inverse Radon Transform
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011777
Valentin Ameres, Meriem Chetmi, Lucas Artmann, Tim C. Lueth
Computed Tomography (CT) and 3D reconstruction contribute significantly to reverse engineering as well as to additive manufacturing. Utilizing CT scans, surface information as well as inner details of objects of interest can be recorded non-destructively. In this work, a low-resolution cone beam computed tomography (CBCT) scanner was used to scan, reconstruct and print plastic components in order to create 3D copies. Software-based calibration using an additively manufactured two-layer plastic phantom containing steel ball bearings was used to detect and correct geometrical alignment errors and improve reconstruction quality. The phantom was designed to be printed additively and assembled without further tools, with an axial connection to the CBCT. Corrections were applied to the two-dimensional 300x300-pixel X-ray projections before reconstruction. A reconstructed volume of 212x212x212 voxels was achieved using either the inverse-Radon-transform-based Feldkamp-Davis-Kress (FDK) algorithm or the Simultaneous Algebraic Reconstruction Technique (SART). In an experiment, the plastic phantom was fabricated and used for misalignment correction. Reconstructions from uncorrected and corrected projections of a 30 mm plastic cube with a center bore were then compared in terms of density. The cube reconstructed from corrected projections had higher voxel density values and sharper slices, demonstrating the successful fabrication and use of the plastic phantom.
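For the central slice of a cone-beam geometry, the FDK algorithm reduces to 2D filtered back-projection, i.e. a discrete inverse Radon transform. The sketch below is a minimal parallel-beam 2D version in numpy, illustrating the principle rather than the authors' cone-beam implementation; the analytic disk sinogram is an invented test object.

```python
import numpy as np

def ramp_filter(sinogram):
    # Filter each projection row with a ramp filter in the Fourier domain.
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

def fbp_reconstruct(sinogram, thetas):
    # Filtered back-projection (discrete inverse Radon transform), parallel-beam.
    n = sinogram.shape[1]
    filtered = ramp_filter(sinogram)
    xs = np.arange(n) - n // 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((n, n))
    for proj, theta in zip(filtered, thetas):
        # Detector coordinate of every pixel for this view angle.
        s = np.clip(X * np.cos(theta) + Y * np.sin(theta) + n // 2, 0, n - 1)
        recon += np.interp(s.ravel(), np.arange(n), proj).reshape(n, n)
    return recon * np.pi / len(thetas)

# Analytic sinogram of a centered disk of radius r: p(s) = 2*sqrt(r^2 - s^2).
n, r = 64, 20
s = np.arange(n) - n // 2
profile = 2 * np.sqrt(np.maximum(r**2 - s**2, 0.0))
thetas = np.linspace(0, np.pi, 90, endpoint=False)
sinogram = np.tile(profile, (len(thetas), 1))
recon = fbp_reconstruct(sinogram, thetas)  # approx. 1 inside the disk, 0 outside
```

The same filtering-then-backprojection structure, with an extra cone-angle weighting, is what FDK applies to each 2D cone-beam projection.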
Citations: 0
Towards Precise Model-free Robotic Grasping with Sim-to-Real Transfer Learning
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011794
Lei Zhang, Kaixin Bai, Zhaopeng Chen, Yunlei Shi, Jianwei Zhang
Precise robotic grasping of novel objects is a major challenge in manufacturing, automation, and logistics. Most current methods for model-free grasping are hampered by sparse data in grasping datasets and by errors in sensor data and contact models. This study combines data generation and sim-to-real transfer learning in a grasping framework that reduces the sim-to-real gap and enables precise and reliable model-free grasping. A large-scale robotic grasping dataset with dense grasp labels is generated using domain randomization methods and a novel data augmentation method for deep learning-based robotic grasping to address the data sparsity problem. We present an end-to-end robotic grasping network with a grasp optimizer. The grasp policies are trained with sim-to-real transfer learning. The results suggest that our grasping framework reduces the uncertainties in grasping datasets, sensor data, and contact models. In physical robot experiments, our grasping framework grasped single known objects and novel complex-shaped household objects with a success rate of 90.91%. In a complex multi-object grasping scenario, the success rate was 85.71%. The proposed grasping framework outperformed two state-of-the-art methods on both known and unknown objects.
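The abstract does not detail the augmentation method, but a standard trick for grasp datasets is to rotate the image and its planar grasp label jointly so the label stays consistent. A hedged numpy sketch (function name, coordinate convention, and the undirected-grasp-axis assumption are ours):

```python
import numpy as np

def augment_grasp(center, angle, img_size, phi):
    """Rotate a planar grasp label together with a square image.

    center: (x, y) grasp point in pixel coordinates.
    angle:  grasp-axis angle in radians.
    phi:    image rotation in radians about the image center.
    """
    cx = cy = (img_size - 1) / 2.0
    c, s = np.cos(phi), np.sin(phi)
    x, y = center[0] - cx, center[1] - cy
    new_center = (c * x - s * y + cx, s * x + c * y + cy)
    # A parallel-jaw grasp axis is undirected, so angles are equivalent mod pi.
    new_angle = (angle + phi) % np.pi
    return new_center, new_angle

# Example: a grasp right of center, image rotated 90 degrees.
nc, na = augment_grasp((60.0, 50.0), 0.0, 101, np.pi / 2)
```

One labeled grasp thus yields many consistent training samples, which is one way dense grasp labels can be multiplied cheaply.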
Citations: 0
Recognition of Degradation Scenarios for LiDAR SLAM Applications
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011727
Chenglin Yang, Zihao Chai, Xiaoxiao Yang, Hanyang Zhuang, Ming Yang
A SLAM system that uses 3D LiDAR as its only sensor is prone to degradation when facing scenarios with sparse structure and few constraints. In such cases it cannot solve for the robot pose from the limited LiDAR constraint information, leading to localization and mapping failures. Because of these limitations of LiDAR, it is difficult to rely solely on the point cloud data it provides to solve localization and mapping in degraded scenarios. The current mainstream approach is to provide additional information through multi-sensor fusion and similar schemes to constrain and correct the system's pose. In a multi-source fusion system, it is still essential to determine the reliability of each sensor source's information in different directions. Hence, the recognition of degradation scenarios has significant research value. In this paper, three schemes, geometric information, constraint disturbance, and residual disturbance, are designed to quantitatively identify the degradation state of the system and estimate the degradation direction. Experimental verification shows that the proposed schemes achieve a favorable recognition effect in degradation scenarios in both simulated and real environments.
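The paper's three schemes are only named in the abstract; a widely used geometric-information-style check aggregates the normals of matched surfels and inspects the eigenvalues of the resulting constraint matrix, since translation along a direction with no normals is unobservable. A sketch of that general idea (threshold and corridor example are assumptions, not the authors' exact formulation):

```python
import numpy as np

def degeneracy_check(normals, ratio_thresh=0.1):
    # Each surfel constrains translation along its normal; aggregate them.
    A = normals.T @ normals
    w, v = np.linalg.eigh(A)  # eigenvalues in ascending order
    degenerate = w[0] < ratio_thresh * w[-1]
    return degenerate, v[:, 0]  # degeneracy flag and least-constrained direction

# A long corridor: walls constrain y, floor/ceiling constrain z, nothing constrains x.
rng = np.random.default_rng(0)
normals = np.vstack([
    np.tile([0.0, 1.0, 0.0], (100, 1)),
    np.tile([0.0, 0.0, 1.0], (100, 1)),
]) + 0.01 * rng.standard_normal((200, 3))
flag, direction = degeneracy_check(normals)  # flag True, direction ~ x-axis
```

The returned eigenvector is exactly the kind of per-direction reliability estimate a multi-source fusion system needs.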
Citations: 0
An Experimental Study of Keypoint Descriptor Fusion
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011825
Yaling Pan, Li He, Y. Guan, Hong Zhang
Local feature descriptors play a crucial role in computer vision problems, especially in robot motion. Existing descriptors are highly accurate, but their performance depends on distracting factors such as illumination and viewpoint, leaving room for further improvement. In this paper, we provide an in-depth analysis of several notable features of the descriptor fusion model (DFM) proposed in our recent work, which uses an autoencoder to combine descriptors and exploit their respective advantages. With this DFM framework, we further validate that fused descriptors retain advantageous properties and that DFM is a generally applicable method with respect to various component descriptors. Specifically, we evaluate multiple combinations of hand-crafted and CNN descriptors on a benchmark dataset with illumination and viewpoint changes to obtain comprehensive experimental results. The results show that the fused descriptors achieve better matching accuracy than their component descriptors.
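The DFM architecture is not specified in the abstract; as a toy illustration of autoencoder-based descriptor fusion, the sketch below trains a one-hidden-layer numpy autoencoder on two concatenated descriptor types and takes the bottleneck as the fused descriptor. All sizes, the tanh bottleneck, and the random stand-in descriptors are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Random stand-ins for two descriptor types (e.g. hand-crafted and CNN) per keypoint.
d1 = rng.standard_normal((256, 32))
d2 = rng.standard_normal((256, 64))
X = np.hstack([d1, d2])                      # concatenated input, 96-D
X /= np.linalg.norm(X, axis=1, keepdims=True)

k = 24                                       # fused descriptor size (bottleneck)
W_enc = 0.1 * rng.standard_normal((96, k))
W_dec = 0.1 * rng.standard_normal((k, 96))

def forward(X, W_enc, W_dec):
    Z = np.tanh(X @ W_enc)                   # fused descriptor
    R = Z @ W_dec                            # reconstruction of the concatenation
    return Z, R, np.mean((R - X) ** 2)

lr = 1.0
_, _, err0 = forward(X, W_enc, W_dec)
for _ in range(300):
    Z, R, _ = forward(X, W_enc, W_dec)
    G = 2.0 * (R - X) / X.size               # dLoss/dR
    dZ = (G @ W_dec.T) * (1.0 - Z ** 2)      # backprop through tanh
    W_dec -= lr * Z.T @ G
    W_enc -= lr * X.T @ dZ
_, _, err1 = forward(X, W_enc, W_dec)        # err1 < err0 after training
```

The bottleneck forces the network to keep whatever information both descriptors share or complement, which is the intuition behind fusing them.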
Citations: 0
Target prediction and temporal localization of grasping action for vision-assisted prosthetic hand
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011751
Xu Shi, Wei Xu, Weichao Guo, X. Sheng
With the development of shared control technology for humanoid prosthetic hands, more and more research focuses on vision-based machine decision making. In this paper, we propose a miniaturized eye-in-hand target object prediction and action decision-making framework for the humanoid hand “approach-grasp” sequence. Our prediction system simultaneously predicts the target object and detects the temporal localization of the grasp action. The system is divided into three main modules: feature logging, target filtering, and grasp triggering. The optimal configuration of the hyper-parameters in each module is determined experimentally. We also propose a prediction quality assessment method for “approach-grasp” behavior at the instance, sequence, and action decision levels. With the optimal hyper-parameter configuration, the prediction system achieves an average instance prediction accuracy (IP) of 0.854 and a grasp action prediction accuracy (GP) of 0.643. It also shows good predictive stability for most object classes, with a number of prediction changes (NPC) below 6.
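The IP and NPC metrics are only named in the abstract; one natural reading, with IP as the fraction of frames predicting the true target and NPC as the number of label switches along the sequence, can be sketched as follows (hedged, as the paper's exact definitions may differ):

```python
def num_prediction_changes(preds):
    # Count how often the predicted target label switches along a sequence.
    return sum(a != b for a, b in zip(preds, preds[1:]))

def instance_accuracy(preds, target):
    # Fraction of frames whose prediction matches the true target object.
    return sum(p == target for p in preds) / len(preds)

# Hypothetical per-frame predictions during one approach toward a cup.
seq = ["cup", "cup", "bottle", "bottle", "cup", "cup", "cup"]
```

Under this reading, a low NPC means the system rarely flip-flops between candidate objects while the hand approaches, which matters for stable grasp triggering.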
Citations: 1
A Predictive Method for Site Selection in Aquaculture with a Robotic Platform
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011913
Tong Shen, Tianqi Zhang, Kai Yuan, Kaiwen Xue, Huihuan Qian
The aquaculture industry significantly impacts human life and social development, since it provides excellent resources and continues to grow to meet our needs. To improve production efficiency and minimize risk, suitable site selection in aquaculture is highly desirable. This paper proposes a predictive method based on environmental sampling information to assess site conditions for aquaculture. A robotic platform is designed to automatically patrol the water body with sensors sampling environmental information. Based on the collected data, a machine learning model is trained and used to estimate the probability that a site is suitable, so that potential sites can be selected for the future aquaculture industry. Both the predictive method and the robotic platform were tested in an outdoor lake, and the results verified their feasibility. The platform and the prediction method can be applied to increase site selection efficiency, thus promoting the development of the aquaculture industry.
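The abstract does not name the model; purely for illustration, a logistic score over hypothetical water-quality features can rank candidate sites by suitability probability. Feature names, weights, and site readings below are all invented:

```python
import numpy as np

# Hypothetical water-quality features sampled by the patrolling platform.
FEATURES = ["temp_C", "pH", "dissolved_O2_mgL", "turbidity_NTU"]

def suitability(x, w, b):
    # Logistic model: probability that a candidate site is suitable.
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

# Invented weights: more dissolved oxygen helps, more turbidity hurts.
w = np.array([0.1, 0.5, 0.8, -0.6])
b = -8.0
sites = {
    "A": np.array([24.0, 7.5, 8.0, 2.0]),   # well-oxygenated, clear water
    "B": np.array([24.0, 7.5, 4.0, 9.0]),   # low oxygen, turbid
}
ranked = sorted(sites, key=lambda s: suitability(sites[s], w, b), reverse=True)
```

In the paper's setting the weights would of course be learned from labeled survey data rather than hand-set.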
Citations: 0
Human-Aided Online Terrain Classification for Bipedal Robots Using Augmented Reality
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011705
Zahraa Awad, Celine Chibani, Noel Maalouf, Imad H. Elhajjl
This paper presents an online training system, enhanced with augmented reality, for improving real-time terrain classification by humanoid robots. The real-time terrain type prediction model relies on data acquired from four different sensors (force, position, current, and inertial) of the NAO humanoid robot. We compare the performance of Stochastic Gradient Descent, the Passive-Aggressive classifier, and the Support Vector Machine in predicting the terrain type being traversed. The models are then trained online by manually inputting the correct terrain type being traversed, improving prediction accuracy over time. An augmented reality (AR) user interface displays the robot diagnostics and the predicted terrain type, and obtains user feedback to correct the terrain type when needed. This allows the user to improve the classification results and streamline the data collection process. The experimental results show that the Passive-Aggressive classifier is the most successful of the three online classifiers, with an accuracy of 81.4%.
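The Passive-Aggressive classifier that wins the comparison has a closed-form per-sample update: stay passive when the hinge loss is zero, otherwise move just far enough to satisfy the margin. A binary PA-I sketch in numpy (the paper's terrain task is multiclass, so this only illustrates the update rule; the separable data stream is invented):

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    # Passive-Aggressive (PA-I) online update for binary labels y in {-1, +1}.
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss > 0.0:
        tau = min(C, loss / np.dot(x, x))  # aggressiveness, capped by C
        w = w + tau * y * x
    return w

rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    y = rng.choice([-1.0, 1.0])
    # Linearly separable stream: feature 0 carries the label, plus noise.
    x = np.array([y * 2.0, 1.0]) + 0.1 * rng.standard_normal(2)
    w = pa_update(w, x, y)
```

Because each update costs a single dot product, the rule suits the human-in-the-loop correction flow: every AR-confirmed label can be folded in immediately.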
Citations: 0
MR-GMMExplore: Multi-Robot Exploration System in Unknown Environments based on Gaussian Mixture Model
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011789
Yichun Wu, Qiuyi Gu, Jincheng Yu, Guangjun Ge, Jian Wang, Q. Liao, Chun Zhang, Yu Wang
Collaborative exploration of an unknown environment is an essential task for mobile robotic systems. Without external positioning, multi-robot mapping methods have relied on the transfer of place descriptors and sensor data for relative pose estimation, which is not feasible in communication-limited environments. In addition, existing frontier-based exploration strategies are mostly designed for occupancy grid maps and thus fail to use the surface information of obstacles in complex three-dimensional scenes. To address these limitations, we use the Gaussian Mixture Model (GMM) as the map representation for both mapping and exploration. We extend our previous mapping work to the exploration setting by introducing MR-GMMExplore, a multi-robot GMM-based exploration system in which robots transfer GMM submaps to reduce data transmission and perform exploration directly on the generated GMM map. Specifically, we propose a GMM spatial information extraction strategy that efficiently extracts obstacle probability information from GMM submaps. We then present a goal selection method that allows robots to explore different areas, and a GMM-based local planner that performs local planning directly on GMM maps instead of converting them into grid maps. Simulation results show that transmitting GMM submaps reduces the communication load by approximately 96% compared with point clouds, and that our mean-based extraction strategy is 4 times faster than the traversal-based one. We also conduct comparative experiments to demonstrate the effectiveness of our approach in reducing backtracking paths and enhancing cooperation. MR-GMMExplore is published as an open-source ROS package at https://github.com/efc-robot/gmm_explore.
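To see why GMM submaps cut communication so sharply, compare the floats needed for a raw point cloud with those of a fitted mixture. The sketch below fits a diagonal-covariance GMM to a toy cloud with a minimal EM loop; it is not the paper's mapping pipeline, and the cluster layout and component count are invented:

```python
import numpy as np

def fit_gmm(points, k=4, iters=30, seed=0):
    """Minimal EM for a diagonal-covariance GMM over a 3-D point cloud."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    mu = points[rng.choice(n, k, replace=False)]
    var = np.ones((k, d))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under diagonal Gaussians (log-domain for stability).
        diff2 = (points[:, None, :] - mu[None]) ** 2 / var[None]
        logp = -0.5 * (diff2.sum(-1) + np.log(var).sum(-1)) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted means, variances, and mixing weights.
        nk = r.sum(0) + 1e-9
        pi = nk / n
        mu = (r.T @ points) / nk[:, None]
        var = (r.T @ points**2) / nk[:, None] - mu**2 + 1e-6
    return pi, mu, var

rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(c, 0.2, (500, 3)) for c in
                   [(0, 0, 0), (5, 0, 0), (0, 5, 0), (5, 5, 1)]])
pi, mu, var = fit_gmm(cloud, k=4)

# Transmitting (pi, mu, var) instead of the raw cloud:
raw_floats = cloud.size                      # 6000 floats
gmm_floats = pi.size + mu.size + var.size    # 28 floats
```

Even this tiny example sends under 1% of the raw payload, consistent in spirit with the ~96% reduction reported in the paper.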
Citations: 0
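The obstacle-probability extraction this abstract describes can be illustrated by evaluating a 3D Gaussian mixture density at query points. This is a minimal sketch of the general technique, not the MR-GMMExplore API; the function name `gmm_occupancy` and the toy mixture parameters are assumptions:

```python
import numpy as np

def gmm_occupancy(points, weights, means, covs):
    """Evaluate a 3D GMM density at query points as a proxy for
    obstacle probability (illustrative sketch, not the paper's code)."""
    points = np.atleast_2d(points)              # (N, 3)
    density = np.zeros(len(points))
    for w, mu, cov in zip(weights, means, covs):
        diff = points - mu                      # (N, 3) offsets from component mean
        inv = np.linalg.inv(cov)
        norm = w / np.sqrt((2 * np.pi) ** 3 * np.linalg.det(cov))
        expo = -0.5 * np.einsum("ni,ij,nj->n", diff, inv, diff)
        density += norm * np.exp(expo)          # weighted Gaussian contribution
    return density

# Two-component toy map: one obstacle blob at the origin, one at (5, 0, 0)
weights = [0.5, 0.5]
means = [np.zeros(3), np.array([5.0, 0.0, 0.0])]
covs = [np.eye(3) * 0.1, np.eye(3) * 0.1]

near = gmm_occupancy([0.0, 0.0, 0.0], weights, means, covs)[0]  # at a mean
far = gmm_occupancy([2.5, 0.0, 0.0], weights, means, covs)[0]   # free space
```

A planner can threshold such densities to classify free versus occupied space without ever rasterizing the GMM into a grid map, which is the efficiency argument the abstract makes.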
SmallRhex: A Fast and Highly-Mobile Hexapod Robot
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10012013
Wenhui Wang, Wujie Shi, Zerui Li, Weiheng Zhuang, Zheng Zhu, Zhenzhong Jia
Owing to their unique C-leg structure, RHex robots achieve good mobility and obstacle-traversal ability despite a relatively simple mechanical design. Building on existing RHex robots and balancing performance against cost, this design develops a small hexapod robot, named smallRhex, that is low-cost yet high-performing. This paper mainly introduces the mechanical structure, gaits, simulation, and physical performance tests of the smallRhex robot. The hardware is based on a Raspberry Pi microcomputer together with RoboMaster motors and accessories, whose high power density meets the dual requirements of performance and cost. The mechanical structure was then designed and assembled using 3D-printed, sheet-metal, and machined parts. At the control level, the Raspberry Pi directly drives the six leg motors. The gait design includes basic locomotion gaits (straight walking and turning) and gaits for complex motions: stair climbing, jumping, and high-obstacle climbing, all of which are simulated in Webots. Finally, performance and gait tests of the robot are carried out, the gait design is further optimized, and the basic design of the smallRhex robot is completed.
{"title":"SmallRhex: A Fast and Highly-Mobile Hexapod Robot","authors":"Wenhui Wang, Wujie Shi, Zerui Li, Weiheng Zhuang, Zheng Zhu, Zhenzhong Jia","doi":"10.1109/ROBIO55434.2022.10012013","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10012013","url":null,"abstract":"Owing to their unique C-leg structure, RHex robots achieve good mobility and obstacle-traversal ability despite a relatively simple mechanical design. Building on existing RHex robots and balancing performance against cost, this design develops a small hexapod robot, named smallRhex, that is low-cost yet high-performing. This paper mainly introduces the mechanical structure, gaits, simulation, and physical performance tests of the smallRhex robot. The hardware is based on a Raspberry Pi microcomputer together with RoboMaster motors and accessories, whose high power density meets the dual requirements of performance and cost. The mechanical structure was then designed and assembled using 3D-printed, sheet-metal, and machined parts. At the control level, the Raspberry Pi directly drives the six leg motors. The gait design includes basic locomotion gaits (straight walking and turning) and gaits for complex motions: stair climbing, jumping, and high-obstacle climbing, all of which are simulated in Webots. Finally, performance and gait tests of the robot are carried out, the gait design is further optimized, and the basic design of the smallRhex robot is completed.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127576239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
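A common way to generate the straight-walking alternating-tripod gait on RHex-class C-leg robots is a Buehler-clock leg profile: each hip motor sweeps slowly through a small stance arc and spins quickly through the rest of the rotation. The sketch below illustrates that general idea under assumed parameter names (`phi_s` slow-sweep angle, `duty` stance fraction); it is not smallRhex's actual controller:

```python
import math

def buehler_clock(t, period=1.0, phi_s=math.pi / 3, duty=0.6, offset=0.0):
    """Buehler-clock leg angle (radians) for a RHex-style C-leg.

    During the stance fraction `duty` of the period the leg sweeps slowly
    through the small arc `phi_s`; during flight it spins quickly through
    the remaining 2*pi - phi_s so it lands ready for the next stance.
    """
    phase = ((t / period) + offset) % 1.0
    if phase < duty:  # stance: slow sweep from -phi_s/2 to +phi_s/2
        return -phi_s / 2 + phi_s * (phase / duty)
    # flight: fast rotation through the rest of the circle
    f = (phase - duty) / (1.0 - duty)
    return phi_s / 2 + (2 * math.pi - phi_s) * f

# Alternating tripod: legs {0, 2, 4} share one clock, legs {1, 3, 5} run
# half a period out of phase, so one tripod is always in (or near) stance.
tripod_a = buehler_clock(0.25, offset=0.0)
tripod_b = buehler_clock(0.25, offset=0.5)
```

The other gaits mentioned in the abstract (stair climbing, jumping) are typically obtained by retuning `phi_s`, `duty`, and the phase offsets rather than by changing the clock structure itself.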
Prospect of Robot Assisted Maxilla-Mandibula-Complex Reposition in Orthognathic Surgery
Pub Date : 2022-12-05 DOI: 10.1109/ROBIO55434.2022.10011845
Jie Liang, Qianqian Li, Xing Wang, Xiaojing Liu
This paper investigates the feasibility and accuracy of robot-assisted maxilla-mandibula-complex (MMC) reposition in orthognathic surgery. A robot system was built from an optical motion-capture system and a universal robotic arm. Computer-assisted surgical simulation (CASS), image-guidance, and robot-control software modules were developed according to the specific requirements of orthognathic surgery. The operative workflow comprises data acquisition, virtual simulation, registration, osteotomy, and robot-assisted bone-segment reposition and fixation. Reposition and holding accuracy were tested on skull models. An optical scanner was used to acquire the intraoperative skull morphology before and after fixation, and a postoperative CT scan was performed once fixation was completed. The virtual skull, intraoperative scan data, and postoperative CT image were superimposed and compared. Error was defined as the root-mean-square (RMS) distance of the MMC between images: positioning accuracy as the RMS between the pre-fixation surface scan and the virtual design skull, and holding accuracy as the RMS between the surface scans taken before and after fixation. A validation test was conducted on five skull models. The mean accuracy of robot-assisted MMC reposition was 0.75 ± 0.69 mm, while the mean holding accuracy during the fixation procedure was 1.56 ± 1.2 mm. The accuracy of robot-assisted MMC reposition is thus clinically feasible, although holding accuracy during fixation is less satisfactory than repositioning accuracy. Further investigation is needed to improve the holding solidity of the robotic manipulator.
{"title":"Prospect of Robot Assisted Maxilla-Mandibula-Complex Reposition in Orthognathic Surgery","authors":"Jie Liang, Qianqian Li, Xing Wang, Xiaojing Liu","doi":"10.1109/ROBIO55434.2022.10011845","DOIUrl":"https://doi.org/10.1109/ROBIO55434.2022.10011845","url":null,"abstract":"This paper investigates the feasibility and accuracy of robot-assisted maxilla-mandibula-complex (MMC) reposition in orthognathic surgery. A robot system was built from an optical motion-capture system and a universal robotic arm. Computer-assisted surgical simulation (CASS), image-guidance, and robot-control software modules were developed according to the specific requirements of orthognathic surgery. The operative workflow comprises data acquisition, virtual simulation, registration, osteotomy, and robot-assisted bone-segment reposition and fixation. Reposition and holding accuracy were tested on skull models. An optical scanner was used to acquire the intraoperative skull morphology before and after fixation, and a postoperative CT scan was performed once fixation was completed. The virtual skull, intraoperative scan data, and postoperative CT image were superimposed and compared. Error was defined as the root-mean-square (RMS) distance of the MMC between images: positioning accuracy as the RMS between the pre-fixation surface scan and the virtual design skull, and holding accuracy as the RMS between the surface scans taken before and after fixation. A validation test was conducted on five skull models. The mean accuracy of robot-assisted MMC reposition was 0.75 ± 0.69 mm, while the mean holding accuracy during the fixation procedure was 1.56 ± 1.2 mm. The accuracy of robot-assisted MMC reposition is thus clinically feasible, although holding accuracy during fixation is less satisfactory than repositioning accuracy. Further investigation is needed to improve the holding solidity of the robotic manipulator.","PeriodicalId":151112,"journal":{"name":"2022 IEEE International Conference on Robotics and Biomimetics (ROBIO)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128933186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
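The RMS accuracy metric defined in this abstract can be sketched as a root-mean-square distance over corresponding landmarks. The function and the toy point sets below are illustrative assumptions; in particular, the sketch assumes point correspondences between the virtual plan and the scanned result have already been established (e.g. by prior registration):

```python
import numpy as np

def rms_error(pts_a, pts_b):
    """Root-mean-square distance between corresponding 3D points (mm)."""
    pts_a, pts_b = np.asarray(pts_a, float), np.asarray(pts_b, float)
    d = np.linalg.norm(pts_a - pts_b, axis=1)  # per-point Euclidean error
    return float(np.sqrt(np.mean(d ** 2)))

# Toy example: every landmark on the "scan" is 1 mm off the "plan" along x,
# so the RMS error is exactly 1.0 mm.
plan = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
scan = plan + np.array([1.0, 0.0, 0.0])
print(rms_error(plan, scan))  # → 1.0
```

Computed this way, the paper's positioning accuracy compares the pre-fixation scan against the virtual design, and the holding accuracy compares the scans taken before and after fixation.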