
Latest publications in Robotics and Autonomous Systems

MirrorNet: Hallucinating 2.5D depth images for efficient 3D scene reconstruction
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-27 · DOI: 10.1016/j.robot.2025.105321
Rafał Staszak, Bartlomiej Kulecki, Marek Kraft, Dominik Belter
Robots face challenges in perceiving new scenes, particularly when registering objects from a single perspective, which results in incomplete shape information about objects. Partial object models negatively influence the performance of grasping methods. To address this, robots can scan the scene from various perspectives or employ methods that directly fill in unknown regions. This research reexamines scene reconstruction, typically formulated in 3D space, and proposes a novel formulation in 2D image space for robots with RGB-D cameras. We introduce a method that generates a depth image from a virtual camera pose located opposite the reconstructed object. The article demonstrates that a convolutional neural network can be trained for accurate depth image generation and subsequent 3D scene reconstruction from a single viewpoint. We show that the proposed approach is computationally efficient and accurate compared to methods that operate directly in 3D space. Furthermore, we illustrate the application of this model in enhancing grasping method success rates.
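The abstract does not spell out how the virtual pose is constructed; one plausible reading of "a virtual camera pose located opposite the reconstructed object" is a point reflection of the real camera through the object center, looking back at it. The numpy sketch below illustrates only that geometric step; the function names and example coordinates are hypothetical, not from the paper.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 0.0, 1.0])):
    """Build a 4x4 camera pose whose z-axis points from eye toward target.
    Degenerates if the viewing direction is parallel to `up`."""
    z = target - eye
    z /= np.linalg.norm(z)
    x = np.cross(up, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, eye
    return T

def mirrored_camera_pose(real_cam_pos, object_center):
    """Place a virtual camera at the point reflection of the real camera
    through the object center, looking back at the object."""
    virtual_pos = 2.0 * object_center - real_cam_pos  # point reflection
    return look_at(virtual_pos, object_center)

# Example: real camera at (1, 0, 0.5) observing an object at the origin.
pose = mirrored_camera_pose(np.array([1.0, 0.0, 0.5]), np.zeros(3))
print(pose.round(3))
```

Rendering depth from this mirrored pose would then give the "opposite side" 2.5D image that the network is trained to hallucinate.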
Citations: 0
Enhanced flexibility and dexterity in robotic endoscopy via a 6-DOF parallel mechanism and eye-gaze-assisted field-of-view control
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-27 · DOI: 10.1016/j.robot.2025.105322
Mengtang Li, Shen Zhao, Shuai Wang, Fanmao Liu
In conventional minimally invasive surgery, an assistant manually steers the endoscope based on the surgeon’s verbal commands, but fatigue and tremor can degrade field-of-view (FOV) stability and efficiency. Robotic endoscopes address this limitation through automated FOV adjustment via image-based visual servoing, ensuring smooth and stable visualization. However, most robotic implementations mount rigid straight-rod endoscopes on external serial arms, limiting dexterity and complicating remote-center-of-motion (RCM) control. Moreover, many automated FOV methods track surgical-tool tips without representing the surgeon’s intention. This work therefore presents a compact 6-DOF parallel endoscopic mechanism that improves flexibility and dexterity while simplifying RCM constraint satisfaction, together with an eye-gaze-assisted multi-tool tracking controller that dynamically weights tools according to surgeon attention. Simulations and experiments across diverse scenarios demonstrate FOV stabilization within 2 s, mean image-space tracking error < 20 pixels, eye-hand error < 3°, and at least a 30% reduction in unnecessary FOV adjustments. Supplementary video is available.
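The abstract says tools are weighted "according to surgeon attention" without giving the weighting law. As a minimal sketch, assuming a Gaussian gaze kernel in image space (the function name, sigma, and coordinates below are illustrative assumptions, not the paper's controller):

```python
import numpy as np

def gaze_weighted_fov_target(tool_tips_px, gaze_px, sigma=80.0):
    """Weight each tracked tool tip by the surgeon's gaze proximity
    (Gaussian kernel in image space) and return the weighted centroid
    that the FOV controller would try to center in the image."""
    tips = np.asarray(tool_tips_px, dtype=float)   # (N, 2) pixel coords
    gaze = np.asarray(gaze_px, dtype=float)        # (2,) gaze point
    d2 = np.sum((tips - gaze) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))           # attention weights
    w /= w.sum()
    return w @ tips                                # weighted centroid

# Two tools; gaze rests near the first, so the target stays close to it.
target = gaze_weighted_fov_target([[200, 240], [560, 300]], gaze_px=[220, 250])
print(target)
```

A scheme of this kind would also suppress unnecessary FOV adjustments: tools far from the gaze point contribute almost nothing to the servoing target.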
Citations: 0
Reference-guided image inpainting via progressive feature interaction and reconstruction for mobile robots with binocular cameras
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-26 · DOI: 10.1016/j.robot.2025.105320
Jingyi Liu, Hengyu Li, Hang Liu, Shaorong Xie, Jun Luo
Image inpainting is a critical technique for recovering missing information caused by camera soiling on mobile robots. However, most existing learning-based methods still struggle to handle damaged images with complex semantic environments and diverse hole patterns, primarily because of the insufficient acquisition and inadequate fusion of scene-consistent prior cues for damaged images. To address this limitation, we propose a novel reference-guided image inpainting network (RGI2N) for mobile robots equipped with binocular cameras, which employs adjacent camera images as inpainting guidance and fuses their prior information via progressive feature interaction to reconstruct damaged regions. Specifically, a back-projection-based feature interaction module (FIM) is proposed to align the features of the reference and damaged images, thereby capturing the contextual information of the reference image for inpainting. Additionally, a content reconstruction module (CRM) based on residual learning and channel attention is presented to selectively aggregate interactive features for reconstructing missing details. Building upon these two modules, we further devise a progressive feature interaction and reconstruction module (PFIRM) that organizes multiple FIM-CRM pairs into a stepwise structure, enabling the progressive fusion of multiscale contextual information derived from both the damaged and reference images. Moreover, a feature refinement module (FRM) is developed to interact with low-level fine-grained features and refine the reconstructed details. Extensive evaluations conducted on the public ETHZ dataset and our self-built MII dataset demonstrate that RGI2N outperforms other state-of-the-art approaches and produces high-quality inpainting results on real soiled data.
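The module architectures are not detailed in the abstract. As a rough, hypothetical stand-in for "residual learning and channel attention" in the CRM, here is a toy PyTorch block in the squeeze-and-excitation style; ChannelAttentionCRM is an illustrative sketch, not the paper's actual design.

```python
import torch
import torch.nn as nn

class ChannelAttentionCRM(nn.Module):
    """Toy stand-in for a content reconstruction module: residual conv
    features re-weighted by squeeze-and-excitation channel attention."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                    # squeeze: global context
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                               # per-channel weights
        )

    def forward(self, x):
        f = self.body(x)
        return x + f * self.attn(f)                     # residual connection

feats = torch.randn(1, 32, 64, 64)          # e.g. fused damaged+reference features
print(ChannelAttentionCRM(32)(feats).shape)  # torch.Size([1, 32, 64, 64])
```

Stacking several such blocks after feature-alignment stages mirrors, at a high level, the FIM-CRM pairing described for the PFIRM.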
Citations: 0
HAD-TAMP: Human adaptive task and motion planning for human–robot collaboration in industrial scenario
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-26 · DOI: 10.1016/j.robot.2025.105318
Alberto Gottardi, Matteo Terreran, Enrico Pagello, Emanuele Menegatti
Task and Motion Planning (TAMP) is essential for efficient Human–Robot Collaboration (HRC) in industrial settings, yet existing approaches struggle to handle human interventions and dynamic environments. This paper presents a Human Adaptive Task and Motion Planning (HAD-TAMP) framework that seamlessly integrates human pose and actions into the planning process to quickly adapt to human requests or deviations from the process plan. The framework consists of three key modules: a task planning module, which generates and updates task sequences based on real-time human input, a motion planning module composed of a set of motion planners specialized for different phases of the collaboration (e.g., collaborative transportation of materials), and a context reasoner module which coordinates the overall process based on the sensory information available. A key contribution is using a receding horizon strategy, enabling real-time adaptation to human inputs and environmental changes. The approach is validated in a real industrial HRC scenario through two applications: gesture-based human–robot interaction and close human–robot collaboration in carbon fiber draping. Experimental results demonstrate the framework’s effectiveness in ensuring adaptability to multiple human requests and efficiency: re-planning is 4 and 5 times faster than generating a new plan.
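To make the receding-horizon strategy concrete: replan from the latest observation, commit only to the first action, then observe again. Below is a toy sketch under that reading, with a placeholder one-dimensional world standing in for the paper's TAMP stack (all names are hypothetical):

```python
def plan_fn(state, goal):
    """Toy task planner: a straight-line sequence of unit steps."""
    step = 1 if goal > state else -1
    return [step] * abs(goal - state)

class World:
    """Placeholder environment; in HRC the state would include human pose."""
    def __init__(self): self.state = 0
    def observe(self): return self.state
    def execute(self, a): self.state += a

world, actions = World(), []
for _ in range(50):                   # receding-horizon loop
    s = world.observe()               # re-observe: a human may have changed things
    if s == 5:
        break
    plan = plan_fn(s, 5)[:3]          # replan, keep only a 3-step horizon
    world.execute(plan[0])            # commit to the first action only
    actions.append(plan[0])
print(actions, world.observe())       # [1, 1, 1, 1, 1] 5
```

Because only the first step of each short plan is executed, a human intervention between iterations simply changes the next observed state, and the next replanning cycle absorbs it.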
Citations: 0
A survey on magnetically driven continuum robots for biomedical application
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-24 · DOI: 10.1016/j.robot.2025.105316
Pan Li, Yifei Chen, Xinghua Lin, Chongcong Ye, Junxia Zhang, Delei Fang, Cunman Liang
This paper systematically reviews the research advancements, core technical characteristics, and application challenges of magnetic continuum robots. As a key direction in the field of soft robotics, magnetic continuum robots leverage the non-contact manipulation characteristics of magnetic field actuation to demonstrate unique advantages in narrow-space operations (such as intravascular guidewire navigation), high-precision manipulation (such as minimally invasive surgery), and dynamic environmental adaptation. Their application scenarios have expanded from single intravascular interventions to complex task execution across the entire body. The paper highlights that the development of various magnetic actuators (such as gradient magnetic field generators and rotating magnetic field devices) has accelerated improvements in the robots’ motion flexibility and task adaptability. However, significant technical bottlenecks remain: insufficient environmental adaptability of control algorithms leading to trajectory deviations, instability caused by the complexity of system dynamics modeling, contradictions between the spatial uniformity and penetration depth of the actuation magnetic field, and the challenge of balancing biocompatibility, mechanical durability, and magnetic response efficiency in flexible polymer materials—all of which limit their clinical application. Finally, from the perspective of technological integration and breakthroughs, this paper discusses future development directions: deep integration of artificial intelligence and control algorithms, application of biocompatible materials and 3D printing technologies, optimization of magnetic actuation platforms, enhancement of multimodal collaborative operation capabilities, and expansion into interdisciplinary fields such as environmental monitoring and elderly care services. This review provides a systematic research framework and conceptual references for the technological evolution and practical application of magnetic continuum robots.
Citations: 0
Robot policy learning from demonstrations and visual rewards for sequential manipulation tasks
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-23 · DOI: 10.1016/j.robot.2025.105311
Abdalkarim Mohtasib, Heriberto Cuayáhuitl
Neural-based reinforcement learning is a promising approach for teaching robots new behaviours. But one of its main limitations is the need for carefully hand-coded reward signals by an expert. It is thus crucial to automate the reward learning process so that new skills can be taught to robots by their users. This article proposes an approach for enabling robots to learn reward signals for sequential tasks from visual observations, eliminating the need for expert-designed reward signals. It involves dividing the sequential task into smaller sub-tasks using a novel auto-labelling technique to generate rewards for demonstration data. A novel image classifier is proposed to estimate the visual rewards for each task accurately. The effectiveness of the proposed approach is demonstrated in generating informative reward signals through comprehensive evaluations on three challenging sequential tasks: block stacking, door opening, and nuts assembly. By using the learnt reward signals to train reinforcement learning agents from demonstration, we are able to induce policies that outperform those trained with sparse oracle rewards. Since our approach consistently outperformed several baselines including DDPG, TD3, SAC, DAPG, GAIL, and AWAC, it represents an advancement in the application of model-free reinforcement learning to sequential robotic tasks.
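As a concrete, simplified illustration of sub-task auto-labelling: once the frame indices where sub-tasks complete are known, each demonstration frame can be given a dense progress reward. The paper derives such labels from visual observations; the hand-given boundaries below are an assumption for illustration only.

```python
def auto_label_rewards(num_frames, boundaries):
    """Toy sub-task labelling: split a demonstration at the given frame
    boundaries and label each frame with the fraction of sub-tasks
    already completed, yielding a dense reward signal in [0, 1)."""
    segments = [0] + sorted(boundaries) + [num_frames]
    n_tasks = len(segments) - 1
    rewards = []
    for k in range(n_tasks):
        for _ in range(segments[k], segments[k + 1]):
            rewards.append(k / n_tasks)   # fraction of sub-tasks completed
    return rewards

# A 10-frame demo split into 3 sub-tasks at frames 3 and 7.
print(auto_label_rewards(10, [3, 7]))
# [0.0, 0.0, 0.0, 0.333..., 0.333..., 0.333..., 0.333..., 0.666..., 0.666..., 0.666...]
```

Frames labelled this way can then supervise a reward classifier, which in turn scores states during reinforcement learning without any expert-coded reward.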
Citations: 0
An adaptive bidirectional optimal rapidly exploring random tree algorithm with dynamic adjustment and extended guidance strategy
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-23 · DOI: 10.1016/j.robot.2025.105314
Dan Chen, Kaiwen Luo, Zichen Wang, Lintao Fan, Changqing Wang
To address the strong expansion randomness, slow convergence, and poor environmental adaptability of the optimal Rapidly-exploring Random Tree (RRT*) algorithm, this paper proposes an adaptive bidirectional RRT* (AB-RRT*) algorithm. The algorithm first initializes the target bias probability and expansion step size for random sampling using an evaluation function based on factors such as map size, number of obstacles, and starting-point distance. It then adaptively adjusts the target bias probability and expansion step size based on the number of collision detections, reducing the randomness of the sampling process and effectively reducing the number of redundant points generated. Second, collision-zone shielding and extended node guidance strategies are proposed to guide the random trees quickly through narrow and complex environments, improving the convergence speed of the algorithm. Finally, redundant points are removed from the initial path and the path is smoothed to obtain a better global path. We conducted comparative experiments between the proposed algorithm and RRT*, Q-RRT*, GB-RRT, RRT-Connect, RRT*-Connect, GBB-RRT*, and Informed-Bi-RRT* in four scenarios of different complexity. The results show that, compared with the best-performing Informed-Bi-RRT* algorithm, the AB-RRT* algorithm shortens path generation time by 17.2% to 71.54% across scenarios, reduces path length by 0.44% to 2.25%, and achieves a planning success rate of 100%. This demonstrates the superiority of the AB-RRT* algorithm in planning efficiency and path quality, as well as its excellent adaptability and robustness in different environments.
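A minimal sketch of the adaptive idea, assuming a simple multiplicative update driven by recent collision counts; the paper's actual evaluation function and update law are not reproduced here, and all thresholds below are illustrative.

```python
def adapt_parameters(collisions, p_goal, step,
                     p_min=0.05, p_max=0.5, step_min=0.5, step_max=5.0):
    """Toy adaptive rule: many recent collisions suggest a cluttered region,
    so shrink the step and rely less on goal bias; few collisions allow
    greedier, longer expansions toward the goal."""
    if collisions > 5:                     # cluttered: explore more cautiously
        p_goal = max(p_min, p_goal * 0.8)
        step = max(step_min, step * 0.8)
    else:                                  # open space: push toward the goal
        p_goal = min(p_max, p_goal * 1.1)
        step = min(step_max, step * 1.1)
    return p_goal, step

p, s = 0.2, 2.0
for c in [0, 1, 8, 9, 2]:                  # collision counts per sampling batch
    p, s = adapt_parameters(c, p, s)
    print(f"collisions={c}: p_goal={p:.3f}, step={s:.2f}")
```

Tying both parameters to collision feedback is what lets a sampler stay greedy in open space yet cautious inside narrow passages.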
Citations: 0
Visually extracting the network topology of drone swarms
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-22 · DOI: 10.1016/j.robot.2025.105313
Nisha Kumari, Kevin Lee, Chathu Ranaweera
Drone swarms operate as decentralized systems where multiple autonomous nodes coordinate their actions through inter-drone communication. A network is a collection of interconnected nodes that communicate to share resources, with its topology representing the physical or logical arrangement of these nodes. For drone swarms, network topology plays a key role in enabling coordinated actions through effective communication links. Understanding the behavior of drone swarms requires analyzing their network topology, as it provides valuable insights into the links and nodes that define their communication patterns. This paper presents a computer vision-based approach to extract and analyze the network topology of such swarms, focusing on the logical communication links rather than physical formations. Using 3D coordinates obtained via stereo vision, the method identifies communication patterns corresponding to star, ring and mesh topologies. The experimental results demonstrate that the proposed method can accurately distinguish between different communication patterns within the swarm, allowing for effective mapping of the network structure. This analysis provides practical insights into how swarm coordination emerges from communication topology and offers a foundation for optimizing swarm behavior in real-world applications.
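As a sketch of the final classification step, assume an adjacency matrix has already been recovered from the stereo-derived 3D coordinates; degree-sequence rules can then separate the three patterns. The rules below are illustrative simplifications, not the paper's method.

```python
import numpy as np

def classify_topology(adj):
    """Classify a communication graph as star, ring, or mesh from its
    degree sequence. `adj` is a symmetric 0/1 adjacency matrix."""
    deg = np.asarray(adj).sum(axis=1).tolist()
    n = len(deg)
    if sorted(deg) == [1] * (n - 1) + [n - 1]:
        return "star"        # one hub connected to all leaves
    if all(d == 2 for d in deg):
        return "ring"        # every node has exactly two links
    return "mesh"            # dense or irregular connectivity

# 4-node ring: each drone talks to its two neighbours.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
print(classify_topology(ring))   # ring
```

In practice the hard part is inferring which links exist at all; once the adjacency matrix is known, the logical topology falls out of simple graph statistics like these.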
Citations: 0
Terrain-based place recognition for LiDAR SLAM of quadruped robots with limited field-of-view measurements
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-22 · DOI: 10.1016/j.robot.2025.105315
Roun Lee, Seonghun Hong, Sukmin Yoon
Over the past few decades, light detection and ranging (LiDAR) sensors have been extensively employed for pose estimation in simultaneous localization and mapping (SLAM). In more recent years, the use of solid-state LiDAR sensors with no rotating mechanisms and a limited field-of-view for SLAM has attracted research attention because of their cost effectiveness and durability. However, it is highly challenging to successfully perform place recognition, which is one of the most important components of SLAM, via limited field-of-view measurements. Failure in place recognition can severely degrade the resulting estimation performance of SLAM algorithms. Considering a terrestrial SLAM framework for quadruped robots with limited field-of-view LiDAR sensors, this study proposes a terrain-based place recognition algorithm that reconstructs and compares detected feature terrains, using a set of foot contact information for quadruped robots. The validity and practical feasibility of the proposed approach are demonstrated through experimental results using a quadruped robot system with a limited field-of-view LiDAR sensor.
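One way to picture the idea: rasterize foot-contact points into a coarse local height grid and compare grids between visits. The sketch below is a hypothetical simplification; the paper's actual terrain descriptor and matching criterion are not reproduced, and all parameters are illustrative.

```python
import numpy as np

def terrain_descriptor(contacts, grid=4, extent=1.0):
    """Rasterize foot-contact points (x, y, z) into a coarse height grid
    around the robot: a simplified stand-in for terrain reconstruction
    from quadruped foot contacts."""
    desc = np.zeros((grid, grid))
    count = np.zeros((grid, grid))
    for x, y, z in contacts:
        i = int(np.clip((x + extent) / (2 * extent) * grid, 0, grid - 1))
        j = int(np.clip((y + extent) / (2 * extent) * grid, 0, grid - 1))
        desc[i, j] += z
        count[i, j] += 1
    # average height per cell; cells with no contacts stay at zero
    return np.divide(desc, count, out=desc, where=count > 0)

def match_score(d1, d2):
    """Lower is more similar; thresholding this gives loop-closure candidates."""
    return float(np.linalg.norm(d1 - d2))

a = terrain_descriptor([(0.1, 0.2, 0.05), (-0.3, 0.4, 0.00), (0.5, -0.5, 0.10)])
b = terrain_descriptor([(0.1, 0.2, 0.06), (-0.3, 0.4, 0.01), (0.5, -0.5, 0.10)])
print(match_score(a, b))
```

The appeal of contact-based descriptors is that they do not depend on the LiDAR's field of view at all, which is precisely where limited-FOV sensors make appearance-based place recognition fragile.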
Citations: 0
An expansion and evolution framework of frame transformation for complex robotic systems
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-21 · DOI: 10.1016/j.robot.2025.105312
Si-Hao Zuo, Yan-Jiang Zhao, Yong-De Zhang, Le Ye
Frame transformation is fundamental to robotic systems, and the topological linkage between frames is the foundation of frame transformation. However, frame transformation for a complex robotic system is challenging because many frames may need to be defined and linked across different technical scenarios, often implicitly and in individual, ad hoc ways. This severely hinders exchange between engineers and hampers the development of robotic technologies. This paper proposes a novel framework to explicitly describe the linkages between frames and automatically realize complete frame transformations for a robotic system. The framework involves three layers: the semantic description layer, the frame topology relationship layer, and the mathematical calculation layer, and it unifies the three. We define an element module that semantically describes each object with a fixed frame and design a relative position transformation chain (RPTC) that explicitly describes the topological linkages between frames. An expansion-and-evolution strategy is then proposed to obtain the final RPTC for the robotic system by expanding and evolving short RPTCs. Finally, a software platform is developed to realize the framework, and a classical medical robotic system is studied as an example. The resulting expansion and evolution degrees demonstrate the validity of the proposed method.
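The core operation such a chain supports is composing relative transforms along a sequence of named frames. A minimal numpy sketch under that reading follows (the frame names and transform helper are illustrative, not the paper's RPTC implementation):

```python
import numpy as np

def transform(rotation_z_deg=0.0, translation=(0.0, 0.0, 0.0)):
    """Homogeneous transform: rotation about z followed by a translation."""
    a = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = translation
    return T

def compose_chain(chain, edges):
    """Multiply the edge transforms along a named frame chain,
    e.g. world -> base -> tool, mimicking a chain lookup."""
    T = np.eye(4)
    for parent, child in zip(chain, chain[1:]):
        T = T @ edges[(parent, child)]
    return T

edges = {
    ("world", "base"): transform(90, (1.0, 0.0, 0.0)),
    ("base", "tool"):  transform(0, (0.0, 0.5, 0.2)),
}
T_world_tool = compose_chain(["world", "base", "tool"], edges)
print(T_world_tool[:3, 3].round(3))   # tool origin in the world frame: [0.5 0. 0.2]
```

Keeping the edge dictionary explicit is the point: once every pairwise linkage is declared, any frame-to-frame transform reduces to a chain lookup and matrix products, which is the kind of automation the framework targets.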
Citations: 0