
Journal of Field Robotics: Latest Publications

Uncertainty-aware LiDAR-based localization for outdoor mobile robots
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-09 · DOI: 10.1002/rob.22392
Geonhyeok Park, Woojin Chung

Accurate and robust localization is essential for autonomous mobile robots. Map matching based on Light Detection and Ranging (LiDAR) sensors has been widely adopted to estimate the global location of robots. However, map-matching performance can be degraded when the environment changes or when sufficient features are unavailable. Indiscriminately incorporating inaccurate map-matching poses for localization can significantly decrease the reliability of pose estimation. This paper aims to develop a robust LiDAR-based localization method based on map matching. We focus on determining appropriate weights that are computed from the uncertainty of map-matching poses. The uncertainty of map-matching poses is estimated from the probability distribution over the poses, which we derive from the normal distribution transform map. A factor graph is employed to combine the map-matching pose, LiDAR-inertial odometry, and global navigation satellite system information. Experimental verification was successfully conducted outdoors on a university campus in three different scenarios, each involving changing or dynamic environments. We compared the performance of the proposed method with three LiDAR-based localization methods. The experimental results show that robust localization performance can be achieved even when map-matching poses are inaccurate in various outdoor environments. The experimental video can be found at https://youtu.be/L6p8gwxn4ak.
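The weighting scheme the abstract describes, trusting a map-matching pose in proportion to its estimated certainty, can be sketched as inverse-covariance (information-form) fusion. This is a minimal 2D illustration of the principle, not the authors' factor-graph implementation; the covariance values below are hypothetical.

```python
import numpy as np

def fuse_poses(poses, covariances):
    """Inverse-covariance (information-form) fusion of independent pose estimates.

    An uncertain map-matching pose contributes less to the fused estimate,
    mirroring the idea of weighting factors by pose uncertainty.
    """
    info = np.zeros((2, 2))    # accumulated information matrix (2D toy example)
    info_mean = np.zeros(2)
    for mu, cov in zip(poses, covariances):
        w = np.linalg.inv(cov)  # information (weight) matrix
        info += w
        info_mean += w @ mu
    fused_cov = np.linalg.inv(info)
    return fused_cov @ info_mean, fused_cov

# A confident odometry estimate and an uncertain map-matching estimate:
odo = (np.array([1.0, 0.0]), np.diag([0.01, 0.01]))
mm = (np.array([1.5, 0.4]), np.diag([1.0, 1.0]))
pose, cov = fuse_poses([odo[0], mm[0]], [odo[1], mm[1]])
# The fused pose stays close to the low-uncertainty odometry estimate.
```

In the paper this weighting happens inside a factor graph over map-matching, LiDAR-inertial odometry, and GNSS factors; the sketch only shows why a well-estimated covariance suppresses inaccurate map-matching poses.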

Citations: 0
Optimising robotic operation speed with edge computing via 5G network: Insights from selective harvesting robots
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-05 · DOI: 10.1002/rob.22384
Usman A. Zahidi, Arshad Khan, Tsvetan Zhivkov, Johann Dichtl, Dom Li, Soran Parsa, Marc Hanheide, Grzegorz Cielniak, Elizabeth I. Sklar, Simon Pearson, Amir Ghalamzan-E.

Selective harvesting by autonomous robots will be a critical enabling technology for future farming. Rising inflation and shortages of skilled labor are driving factors that can encourage user acceptance of robotic harvesting. For example, robotic strawberry harvesting requires real-time high-precision fruit localization, three-dimensional (3D) mapping, and path planning for 3D cluster manipulation. Whilst industry and academia have developed multiple strawberry harvesting robots, none have yet achieved cost parity with human labor. Achieving this goal requires increased picking speed (perception, control, and movement), improved accuracy, and low-cost robotic system designs. We propose the edge-server over 5G for Selective Harvesting (E5SH) system, which integrates a high-bandwidth, low-latency Fifth-Generation (5G) mobile network into a crop harvesting robotic platform and which we view as an enabler for future robotic harvesting systems. We also consider processing scale and speed in conjunction with the system's environmental and energy costs. A system architecture is presented and evaluated with quantitative results from a series of experiments that compare the performance of the system under different architecture choices, including image segmentation models, network infrastructure (5G vs. Wireless Fidelity), and messaging protocols such as Message Queuing Telemetry Transport and Transport Control Protocol Robot Operating System. Our results demonstrate that the E5SH system delivers a step-change peak processing speedup of more than 18-fold over a standalone embedded Nvidia Jetson Xavier NX system.
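The payoff of edge offloading depends on whether the network round-trip is small relative to the onboard compute time, which is why a low-latency 5G link matters. A toy calculation of the effective per-frame speedup, with hypothetical timings chosen only to illustrate how an 18-fold figure can arise (they are not the paper's measurements):

```python
def effective_speedup(t_onboard_ms, t_edge_ms, rtt_ms):
    """Per-frame speedup of edge offload once the network round-trip is included."""
    return t_onboard_ms / (t_edge_ms + rtt_ms)

# Hypothetical numbers: 900 ms onboard segmentation versus
# 30 ms on an edge server plus a 20 ms 5G round-trip.
print(effective_speedup(900, 30, 20))  # -> 18.0
```

With a slower link (say a 200 ms Wi-Fi round-trip in the same model), the speedup collapses to under 4-fold, which is the architectural trade-off the paper's 5G-vs-WiFi comparison probes.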

Citations: 0
Semihierarchical reconstruction and weak-area revisiting for robotic visual seafloor mapping
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-04 · DOI: 10.1002/rob.22390
Mengkun She, Yifan Song, David Nakath, Kevin Köser

Despite the impressive results achieved by many on-land visual mapping algorithms in recent decades, transferring these methods from land to the deep sea remains a challenge due to harsh environmental conditions. Images captured by autonomous underwater vehicles, equipped with high-resolution cameras and artificial illumination systems, often suffer from heterogeneous illumination and quality degradation caused by attenuation and scattering, on top of the refraction of light rays. These challenges often cause on-land Simultaneous Localization and Mapping (SLAM) approaches to fail when applied underwater, or cause Structure-from-Motion (SfM) approaches to drift or omit challenging images, leading to gaps, jumps, or weakly reconstructed areas. In this work, we present a navigation-aided hierarchical reconstruction approach to facilitate the automated robotic three-dimensional reconstruction of hectares of seafloor. Our hierarchical approach combines the advantages of SLAM and global SfM, which are much more efficient than incremental SfM, while ensuring the completeness and consistency of the global map. This is achieved by identifying and revisiting problematic or weakly reconstructed areas, avoiding the omission of images, and making better use of limited dive time. The proposed system has been extensively tested and evaluated during several research cruises, demonstrating its robustness and practicality in real-world conditions.
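The weak-area idea can be sketched as a coverage check: bin the reconstructed points into seafloor grid cells and flag cells with too few observations as candidates for revisiting. This is a simplified stand-in for the paper's criterion, with hypothetical thresholds:

```python
def weak_areas(points, count_threshold=3, cell=1.0):
    """Bin reconstructed 3D points into a horizontal seafloor grid and flag
    cells whose point count falls below a threshold as weakly reconstructed."""
    cells = {}
    for x, y, _z in points:
        key = (int(x // cell), int(y // cell))
        cells[key] = cells.get(key, 0) + 1
    return [key for key, n in cells.items() if n < count_threshold]

# One densely reconstructed cell and one sparse cell:
pts = [(0.1, 0.2, -5.0)] * 10 + [(3.4, 1.2, -5.1)]
print(weak_areas(pts))  # -> [(3, 1)]
```

A mission planner could then schedule revisits over the flagged cells instead of discarding the weak imagery, which is the "making better use of limited dive time" point in the abstract.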

Citations: 0
Development and field evaluation of a VR/AR-based remotely controlled system for a two-wheel paddy transplanter
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-02 · DOI: 10.1002/rob.22389
Shiv Kumar Lohan, Mahesh Kumar Narang, Parmar Raghuvirsinh, Santosh Kumar, Lakhwinder Pal Singh

Operating a two-wheel paddy transplanter traditionally poses physical strain and cognitive workload challenges for farm workers, especially during headland turns. This study introduces a virtual reality (VR)/augmented reality (AR)-based remote-control system for a two-wheel paddy transplanter to resolve these issues. The system replaces manual controls with VR interfaces, integrating gear motors and an electronic control unit. Front and rear-view cameras provide real-time field perception on light-emitting diode screens, displaying path trajectories via an autopilot controller and a real-time kinematic global navigation satellite system module. Human operators manipulate the machine using a hand-held remote controller while observing live camera feeds and path navigation trajectories. The study found that forward speed required optimization within manageable limits of 1.75–2.00 km h⁻¹ for walk-behind operation and 2.00–2.25 km h⁻¹ for remote-controlled operation. While higher speeds enhanced field capacity by 11.67%–12.95%, they also resulted in 0.74%–1.17% lower field efficiency. Additionally, analysis of the operators' physiological workload revealed significant differences between walk-behind and remotely controlled operation. Significant differences in energy expenditure rate (EER) were observed between walk-behind and remote-controlled paddy transplanters, with EER values ranging from 8.20 ± 0.80 to 27.67 ± 0.45 kJ min⁻¹ and 7.56 ± 0.55 to 9.72 ± 0.37 kJ min⁻¹, respectively (p < 0.05). Overall, the VR-based remote-control system shows promise in enhancing operational efficiency and reducing physical strain in paddy transplanting operations.
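The capacity-versus-efficiency trade-off reported above follows from the standard effective-field-capacity relation, capacity = speed × working width × field efficiency. A small sketch with illustrative numbers (the working width and efficiency values below are assumptions, not figures from the study):

```python
def field_capacity(speed_kmh, width_m, efficiency):
    """Effective field capacity in ha/h: speed x working width x field efficiency.
    1 km/h over a 1 m width sweeps 1000 m2/h, i.e. 0.1 ha/h, hence the /10."""
    return speed_kmh * width_m * efficiency / 10.0

# Illustrative values only: a 1.2 m working width, with the remote-controlled
# machine running faster at a slightly lower field efficiency.
slow = field_capacity(1.75, 1.2, 0.80)  # walk-behind at the low end of its range
fast = field_capacity(2.25, 1.2, 0.79)  # remote-controlled at the high end
# Higher speed raises capacity even though field efficiency drops slightly.
```

This is the same pattern as the abstract's numbers: the percentage gain in capacity from the speed increase outweighs the sub-2% loss in field efficiency.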

Citations: 0
Multitarget adaptive virtual fixture based on task learning for hydraulic manipulator
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-02 · DOI: 10.1002/rob.22386
Min Cheng, Renming Li, Ruqi Ding, Bing Xu

Heavy-duty construction tasks implemented by hydraulic manipulators are highly challenging due to unstructured, hazardous environments. Considering that many tasks have quasirepetitive features (such as cyclic material handling or excavation), a multitarget adaptive virtual fixture (MAVF) method based on teleoperation-based learning from demonstration is proposed to improve task efficiency and safety by generating an online variable assistance force on the master device. First, the demonstration trajectory of picking scattered materials is learned to extract its distribution, and the nominal trajectory is generated. Then, the MAVF is established and adjusted online through a defined nonlinear variable stiffness and the position deviation from the nominal trajectory. An energy tank is introduced to regulate the stiffness so that passivity and stability are ensured. Using the operation modes without virtual fixture (VF) assistance and with a traditional weighted-adaptation VF as comparisons, two groups of tests, with and without time delay, were carried out to validate the proposed method.
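The interplay of variable stiffness and the energy tank can be sketched as follows: the assistance force grows with deviation from the nominal trajectory, but is cut off when the tank is depleted, so the controller cannot inject net energy into the system. The stiffness law and thresholds here are hypothetical, not the paper's:

```python
def vf_force(x, x_nom, k_max, tank_energy, e_min=0.1):
    """Virtual-fixture assistance force with an energy-tank passivity guard.

    Stiffness scales with the deviation from the nominal trajectory
    (a hypothetical nonlinear law), and is zeroed when the tank is
    nearly empty so passivity is preserved.
    """
    deviation = x - x_nom
    k = k_max * min(1.0, abs(deviation))  # stiffness grows with deviation
    if tank_energy < e_min:               # no energy budget left to inject
        k = 0.0
    return -k * deviation

print(vf_force(1.2, 1.0, 50.0, tank_energy=2.0))   # pulls back toward nominal
print(vf_force(1.2, 1.0, 50.0, tank_energy=0.05))  # tank depleted: no assistance
```

In the actual controller the tank energy would be updated each cycle from the power exchanged with the operator; this sketch only shows the guard condition.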

Citations: 0
A simulation-assisted point cloud segmentation neural network for human–robot interaction applications
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-01 · DOI: 10.1002/rob.22385
Jingxin Lin, Kaifan Zhong, Tao Gong, Xianmin Zhang, Nianfeng Wang

With the advancement of industrial automation, the frequency of human–robot interaction (HRI) has significantly increased, necessitating a paramount focus on ensuring human safety throughout this process. This paper proposes a simulation-assisted neural network for point cloud segmentation in HRI, specifically distinguishing humans from various surrounding objects. During HRI, readily accessible prior information, such as the positions of background objects and the robot's posture, can generate a simulated point cloud and assist in point cloud segmentation. The simulation-assisted neural network takes simulated and actual point clouds as dual inputs. A simulation-assisted edge convolution module in the network combines features from the actual and simulated point clouds, updating the features of the actual point cloud to incorporate simulation information. Experiments on point cloud segmentation in industrial environments verify the efficacy of the proposed method.
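The role of the simulated cloud can be illustrated with a much simpler stand-in for the learned edge-convolution module: points in the actual scan with no nearby counterpart in the simulated cloud (rendered from the known background and robot posture) are candidates for the human class. A sketch with a hypothetical tolerance:

```python
import numpy as np

def unexplained_points(actual, simulated, tol=0.05):
    """Flag actual points that have no neighbor within `tol` in the simulated
    cloud. Background and robot geometry are simulable from priors, so
    unexplained points are human-class candidates (simplified sketch)."""
    # Pairwise distances between actual (N,3) and simulated (M,3) points:
    d = np.linalg.norm(actual[:, None, :] - simulated[None, :, :], axis=-1)
    return d.min(axis=1) > tol

sim = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # known background scene
act = np.array([[0.01, 0.0, 0.0], [0.5, 0.5, 1.0]])  # second point is unexplained
print(unexplained_points(act, sim))
```

The paper's network instead fuses per-point features from both clouds inside the segmentation model, which is more robust than a raw distance test to sensor noise and imperfect priors.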

Citations: 0
Three-dimensional kinematics-based real-time localization method using two robots
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-01 · DOI: 10.1002/rob.22383
Guy Elmakis, Matan Coronel, David Zarrouk

This paper presents a precise two-robot collaboration method for three-dimensional (3D) self-localization relying on a single rotating camera and onboard accelerometers used to measure the tilt of the robots. This method allows for localization in global positioning system-denied environments and in the presence of magnetic interference, or in relatively (or totally) dark, unstructured, unmarked locations. One robot moves forward at each step while the other remains stationary. The tilt angles of the robots obtained from the accelerometers and the rotational angle of the turret, combined with the video analysis, make it possible to continuously calculate the location of each robot. We describe the hardware setup used for the experiments and provide a detailed description of the algorithm, which fuses the data obtained by the accelerometers and cameras and runs in real time on onboard microcomputers. Finally, we present 2D and 3D experimental results, which show that the system achieves 2% accuracy for the total traveled distance (see Supporting Information S1: video).
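The geometric core of such a scheme, recovering a 3D offset from the turret's rotation angle, a tilt-derived elevation, and a range estimate, is a spherical-to-Cartesian conversion. A sketch of that step only (the paper's full pipeline fuses accelerometer and video data; the range input here is assumed to come from that analysis):

```python
import math

def relative_position(azimuth_deg, elevation_deg, distance_m):
    """3D offset of the moving robot as seen from the stationary one, given
    the turret rotation (azimuth), a tilt-derived elevation, and a range."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (distance_m * math.cos(el) * math.cos(az),
            distance_m * math.cos(el) * math.sin(az),
            distance_m * math.sin(el))

# Moving robot sighted 2 m away, 90 degrees to the side, on level ground:
x, y, z = relative_position(90.0, 0.0, 2.0)
```

Chaining these per-step offsets as the robots alternate moving and observing yields the continuously updated global pose of each robot.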

Citations: 0
Soft crawling caterpillar driven by electrohydrodynamic pumps
IF 4.2 · CAS Tier 2 (Computer Science) · Q2 (Robotics) · Pub Date: 2024-07-01 · DOI: 10.1002/rob.22388
Tianyu Zhao, Cheng Wang, Zhongbao Luo, Weiqi Cheng, Nan Xiang

Soft crawling robots are usually driven by bulky and complex external pneumatic or hydraulic actuators. In this work, we propose a miniaturized soft crawling caterpillar based on electrohydrodynamic (EHD) pumps. The caterpillar is mainly composed of a flexible EHD pump that provides the driving force, an artificial muscle that performs the crawling, a fluid reservoir, and several stabilizers and auxiliary feet. To achieve better crawling performance, the flow rate and pressure of the EHD pump were improved with a curved electrode design. The electrode gap, electrode overlap length, channel height, electrode thickness, and number of electrode pairs of the EHD pump were further optimized. Compared with EHD pumps with conventional straight electrodes, our EHD pump showed a 50% enhancement in driving pressure and a 60% increase in flow rate. The bending capability of the artificial muscles was also characterized, showing a maximum bending angle of over 50°. The crawling ability of the soft caterpillar was then tested. Finally, our caterpillar offers the advantages of simple fabrication, low cost, fast movement, and a small footprint, giving it robust and wide potential for practical use, especially over various terrains.

{"title":"Soft crawling caterpillar driven by electrohydrodynamic pumps","authors":"Tianyu Zhao,&nbsp;Cheng Wang,&nbsp;Zhongbao Luo,&nbsp;Weiqi Cheng,&nbsp;Nan Xiang","doi":"10.1002/rob.22388","DOIUrl":"10.1002/rob.22388","url":null,"abstract":"<p>Soft crawling robots are usually driven by bulky and complex external pneumatic or hydraulic actuators. In this work, we proposed a miniaturized soft crawling caterpillar based on electrohydrodynamic (EHD) pumps. The caterpillar was mainly composed of a flexible EHD pump for providing the driving force, an artificial muscle for performing the crawling, a fluid reservoir, and several stabilizers and auxiliary feet. To achieve better crawling performances for our caterpillar, the flow rate and pressure of the EHD pump were improved by using a curved electrode design. The electrode gap, electrode overlap length, channel height, electrode thickness, and electrode pair number of the EHD pump were further optimized for better performance. Compared with the EHD pumps with conventional straight electrodes, our EHD pump showed a 50% enhancement in driving pressure and a 60% increase in flow rate. The bending capability of the artificial muscles was also characterized, showing a maximum bending angle of over 50°. Then, the crawling ability of the soft crawling caterpillar is also tested. 
Finally, our caterpillar owns the advantages of simple fabrication, low-cost, fast movement speed, and small footprint, which has robust and wide potential for practical use, especially over various terrains.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"41 8","pages":"2705-2714"},"PeriodicalIF":4.2,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141532743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Autonomous navigation method based on RGB-D camera for a crop phenotyping robot
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date : 2024-06-30 DOI: 10.1002/rob.22379
Meng Yang, Chenglong Huang, Zhengda Li, Yang Shao, Jinzhan Yuan, Wanneng Yang, Peng Song

Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic trait collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed for its real-time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of the angle deviation (α) and lateral deviation (d). Fuzzy controllers were designed with α and d as inputs for real-time deviation correction while the phenotyping robot walks. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of a row was 100%. These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.
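The pipeline in the abstract above—back-project segmented feature points using pixel and depth information, then compute the angle deviation α and lateral deviation d—can be sketched with the standard pinhole camera model. This is a hedged reconstruction: the function names, intrinsics, and the exact α/d definitions (x as lateral offset, z as forward distance) are assumptions, not the paper's published formulation.

```python
import math

def pixel_to_camera_xyz(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel with its measured depth into camera-frame 3D
    coordinates via the pinhole model (fx, fy, cx, cy are intrinsics)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def deviations(near_pt, far_pt):
    """Angle deviation alpha and lateral deviation d from two navigation
    feature points on the crop-row centerline. Assumed frame: x is the
    lateral offset, z is the forward distance."""
    alpha = math.atan2(far_pt[0] - near_pt[0], far_pt[2] - near_pt[2])  # rad
    d = near_pt[0]  # lateral offset of the nearest centerline point, m
    return alpha, d

# Two hypothetical centerline feature points, 1 m and 3 m ahead of the robot.
near = pixel_to_camera_xyz(360, 260, 1.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
far = pixel_to_camera_xyz(380, 250, 3.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
alpha, d = deviations(near, far)
```

The resulting α and d would then be fed to the fuzzy controllers as the two crisp inputs for steering correction.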

{"title":"Autonomous navigation method based on RGB-D camera for a crop phenotyping robot","authors":"Meng Yang,&nbsp;Chenglong Huang,&nbsp;Zhengda Li,&nbsp;Yang Shao,&nbsp;Jinzhan Yuan,&nbsp;Wanneng Yang,&nbsp;Peng Song","doi":"10.1002/rob.22379","DOIUrl":"10.1002/rob.22379","url":null,"abstract":"<p>Phenotyping robots have the potential to obtain crop phenotypic traits on a large scale with high throughput. Autonomous navigation technology for phenotyping robots can significantly improve the efficiency of phenotypic traits collection. This study developed an autonomous navigation method utilizing an RGB-D camera, specifically designed for phenotyping robots in field environments. The PP-LiteSeg semantic segmentation model was employed due to its real-time and accurate segmentation capabilities, enabling the distinction of crop areas in images captured by the RGB-D camera. Navigation feature points were extracted from these segmented areas, with their three-dimensional coordinates determined from pixel and depth information, facilitating the computation of angle deviation (<i>α</i>) and lateral deviation (<i>d</i>). Fuzzy controllers were designed with <i>α</i> and <i>d</i> as inputs for real-time deviation correction during the walking of phenotyping robot. Additionally, the method includes end-of-row recognition and row spacing calculation, based on both visible and depth data, enabling automatic turning and row transition. The experimental results showed that the adopted PP-LiteSeg semantic segmentation model had a testing accuracy of 95.379% and a mean intersection over union of 90.615%. The robot's navigation demonstrated an average walking deviation of 1.33 cm, with a maximum of 3.82 cm. Additionally, the average error in row spacing measurement was 2.71 cm, while the success rate of row transition at the end of the row was 100%. 
These findings indicate that the proposed method provides effective support for the autonomous operation of phenotyping robots.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"41 8","pages":"2663-2675"},"PeriodicalIF":4.2,"publicationDate":"2024-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/rob.22379","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141525510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design and movement mechanism analysis of a multiple degree of freedom bionic crocodile robot based on the characteristic of "death roll"
IF 4.2 CAS Tier 2 (Computer Science) Q2 ROBOTICS Pub Date : 2024-06-21 DOI: 10.1002/rob.22380
Chujun Liu, Jingwei Wang, Zhongyang Liu, Zejia Zhao, Guoqing Zhang

This paper introduces a multi-degree-of-freedom bionic crocodile robot designed to tackle the challenge of cleaning pollutants and debris from the surfaces of narrow, shallow rivers. The robot mimics the "death roll" motion of crocodiles, a technique used for object disintegration. First, the design incorporates a swinging tail driven by a multi-section oscillating guide-bar mechanism. By analyzing three-, four-, and five-section tail structures, the four-section tail was identified as the most effective, offering optimal strength and swing amplitude. Under a single motor's drive, the four sections reach maximum swing angles of 8.05°, 20.95°, 35.09°, and 43.84°, respectively. Next, the robotic legs were designed with a double-parallelogram mechanism, facilitating both crawling and retracting movements. In addition, the mouth employs a double-rocker mechanism for efficient closure and locking, achieving an average torque of 5.69 N m from a motor torque of 3.92 N m. Moreover, the robotic body was designed with upper and lower segment structures, and waterproofing was also considered. Finally, the kinematic mechanism and mechanical properties of the bionic crocodile structure were analyzed through both modeling and field tests. The results demonstrated the exceptional kinematic performance of the bionic crocodile robot, which effectively replicates the authentic movement characteristics of a crocodile.
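The multi-section tail described above can be treated as a planar serial chain, which makes the reported per-section swing angles easy to turn into a tail-tip position. The sketch below is an illustration only: the link lengths are hypothetical placeholders, and treating each listed angle as the section's absolute orientation relative to the body axis is an interpretation not stated explicitly in the abstract.

```python
import math

# Maximum swing angles of the four tail sections reported in the abstract
# (degrees); the 0.1 m link lengths below are hypothetical placeholders.
MAX_SECTION_ANGLES_DEG = [8.05, 20.95, 35.09, 43.84]

def tail_tip_position(section_angles_deg, link_lengths_m):
    """Planar forward kinematics of the multi-section tail, assuming each
    angle is that section's absolute orientation w.r.t. the body axis."""
    x = y = 0.0
    for angle_deg, length in zip(section_angles_deg, link_lengths_m):
        rad = math.radians(angle_deg)
        x += length * math.cos(rad)
        y += length * math.sin(rad)
    return (x, y)

# Tail-tip position at full swing, with equal 0.1 m sections.
tip = tail_tip_position(MAX_SECTION_ANGLES_DEG, [0.1] * 4)
```

Sweeping the section angles between their negative and positive maxima over time would trace out the oscillating tail motion that the single-motor guide-bar mechanism produces.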

{"title":"Design and movement mechanism analysis of a multiple degree of freedom bionic crocodile robot based on the characteristic of “death roll”","authors":"Chujun Liu,&nbsp;Jingwei Wang,&nbsp;Zhongyang Liu,&nbsp;Zejia Zhao,&nbsp;Guoqing Zhang","doi":"10.1002/rob.22380","DOIUrl":"10.1002/rob.22380","url":null,"abstract":"<p>This paper introduces a multi-degree of freedom bionic crocodile robot designed to tackle the challenge of cleaning pollutants and debris from the surfaces of narrow, shallow rivers. The robot mimics the “death roll” motion of crocodiles which is a technique used for object disintegration. First, the design incorporated a swinging tail mechanism using a multi-section oscillating guide-bar mechanism. By analyzing three-, four-, and five-section tail structures, the four-section tail was identified as the most effective structure, offering optimal strength and swing amplitude. Each section of the tail can reach maximum swing angles of 8.05°, 20.95°, 35.09°, and 43.84°, respectively, under a single motor's drive. Next, the robotic legs were designed with a double parallelogram mechanism, facilitating both crawling and retracting movements. In addition, the mouth employed a double-rocker mechanism for efficient closure and locking, achieving an average torque of 5.69 N m with a motor torque of 3.92 N m. Moreover, the robotic body was designed with upper and lower segment structures and waterproofing function was also considered. Besides, the kinematic mechanism and mechanical properties of the bionic crocodile structure were analyzed from the perspectives of modeling and field tests. 
The results demonstrated an exceptional kinematic performance of the bionic crocodile robot, effectively replicating the authentic movement characteristics of a crocodile.</p>","PeriodicalId":192,"journal":{"name":"Journal of Field Robotics","volume":"41 8","pages":"2650-2662"},"PeriodicalIF":4.2,"publicationDate":"2024-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141529118","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0