
Robotics and Autonomous Systems: Latest Publications

Automatic measurement process for hand–eye calibration based on Archimedean solids pose distribution
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2026-01-05 · DOI: 10.1016/j.robot.2026.105333
Kaifan Zhong, Nianfeng Wang, Xianmin Zhang
Hand–eye calibration is a vital process for determining an unknown transformation between sensors and a robot end frame before applying robot vision. In addition to optimizing the mathematical solution, refining the pose distribution involved in the calibration can improve the calibration accuracy and efficiency. To optimize the pose distribution, the 3-D position distribution of the tool centre point is designed first, and then the final poses are determined considering the current application scenario. In this paper, an automatic pose generation method is proposed to stably output suitable poses in on-site calibration scenes when an arbitrary 3-D position distribution of the tool centre point is input. Based on this, different pose distributions are discussed regarding their effect on the calibration error, and an indicator is presented to evaluate the performance of these distributions before executing a calibration process. Moreover, a special pose distribution formed by an Archimedean solid is presented, and it shows better performance in improving the hand–eye calibration accuracy and efficiency. Both simulation and on-site experiments are carried out to verify the proposed methods and analyse the effect of different distributions on the calibration results.
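As an illustration of the pose-distribution idea, the sketch below generates tool-centre-point positions at the vertices of a cuboctahedron (one of the Archimedean solids) and orients each pose toward the calibration target. This is a minimal reading of the abstract, not the paper's actual construction; all names (cuboctahedron_vertices, calibration_poses) are hypothetical.

```python
import numpy as np

def cuboctahedron_vertices(radius):
    """12 vertices of a cuboctahedron (an Archimedean solid): all distinct
    permutations of (+-1, +-1, 0), scaled so each vertex lies at `radius`."""
    verts = set()
    for x in (1.0, -1.0):
        for y in (1.0, -1.0):
            verts.update({(x, y, 0.0), (x, 0.0, y), (0.0, x, y)})
    scale = radius / np.sqrt(2.0)  # unscaled vertices have norm sqrt(2)
    return np.array(sorted(verts)) * scale

def look_at_rotation(position, target, up=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix whose z-axis points from `position` toward `target`."""
    z = target - position
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)
    if np.linalg.norm(x) < 1e-8:      # viewing direction parallel to `up`
        x = np.array([1.0, 0.0, 0.0])
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    return np.column_stack((x, y, z))

def calibration_poses(target_center, radius=0.3):
    """4x4 tool poses: TCP positions on the solid's vertices, each oriented
    toward the calibration target placed at the solid's centre."""
    c = np.asarray(target_center, float)
    poses = []
    for v in cuboctahedron_vertices(radius):
        T = np.eye(4)
        T[:3, :3] = look_at_rotation(c + v, c)
        T[:3, 3] = c + v
        poses.append(T)
    return poses

poses = calibration_poses(target_center=[0.6, 0.0, 0.2])  # 12 candidate poses
```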
Citations: 0
Human–robot collaborative control method based on command-weighted fusion strategy for manned legged robot
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2026-01-04 · DOI: 10.1016/j.robot.2025.105323
Yaojin Fan, Bo You, Jiayu Li, Yufei Liu, Chen Chen, Xiaolei Chen, Liang Ding
This paper proposes a human–robot collaborative control method based on a command-weighted fusion strategy for manned legged robots, addressing the challenges posed by their complex structure. These challenges affect both the safety of autonomous decision-making algorithms and the complexity of manual control. First, we design an autonomous command optimization method integrating terrain information and cost functions to enhance decision-making in complex terrains. Subsequently, a method for optimizing the driving weighting factors is designed, utilizing a prior mechanism and a rule knowledge base, while considering the influence of driver reliability and terrain complexity on driving safety and stability. Through analysis of human–machine driving intentions and the autonomous driving weighting factor, a command-weighted fusion strategy for human–machine commands is devised to achieve rational dynamic allocation of driving weights and command fusion. Finally, validation through a human–robot collaborative control experiment demonstrates that the proposed control strategy effectively leverages the strengths of both human drivers and intelligent systems, yielding satisfactory control performance.
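A minimal sketch of what such a command-weighted fusion might look like, assuming the driving weight is a simple function of driver reliability and terrain complexity; the paper's prior mechanism and rule knowledge base are not reproduced here, and all names are hypothetical.

```python
import numpy as np

def fusion_weight(driver_reliability, terrain_complexity):
    """Heuristic driving weight in [0, 1]: trust the human more when they are
    reliable and the terrain is simple. Both inputs are assumed in [0, 1]."""
    w = driver_reliability * (1.0 - terrain_complexity)
    return float(np.clip(w, 0.0, 1.0))

def fuse_commands(u_human, u_auto, w):
    """Command-weighted fusion: convex combination of the two command vectors."""
    return w * np.asarray(u_human, float) + (1.0 - w) * np.asarray(u_auto, float)

# Example: a moderately reliable driver steering on rough terrain.
w = fusion_weight(driver_reliability=0.8, terrain_complexity=0.6)
u = fuse_commands(u_human=[0.5, 0.3], u_auto=[0.4, -0.1], w=w)  # [vx, yaw_rate]
```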
Citations: 0
On the video quality captured by a surveillance mobile robot
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2026-01-03 · DOI: 10.1016/j.robot.2026.105330
Adwaith Vijayakumar, Ishank Juneja, Leena Vachhani
In many robot surveillance applications, the major contributor to the quality degradation of the captured video is the unintended relative motion between the camera and the scene. This unintended motion induces an effect called jitter in the captured video sequence. The evaluation of video quality captured by a mobile robot in surveillance scenarios is often application-specific and typically based on the amount of jitter obtained through feature tracking, camera path reconstruction, or intensity patterns obtained across video frames. The contributions of this paper are twofold: development and benchmarking of a novel algorithm for video quality assessment, and jitter-specific recommendations for stabilization approaches. Unlike existing Video Quality Assessment (VQA) scores, the proposed Topology Score (TS) is a non-reference technique that does not involve feature tracking or camera path reconstruction, making it suitable for mobile robots used for surveillance. We adopt sliding-window geometry with the persistent homology concept to quantify the jitter associated with the periodic/quasiperiodic oscillations induced by moving mobile robots, which in turn yields a VQA score. The experimental results suggest that the trend of the proposed score aligns with existing rhythm scores, which correlate highly with human subjective evaluation but require a reference video for assessment. Additionally, we perform a comparative study of various video stabilization algorithms on three categories of robots, grouped by jitter characteristics: (1) spherical robot videos with second-order damped oscillations causing low-frequency high-amplitude jitter, (2) autonomous drone videos with intermittent jitter, and (3) humanoids mimicked by the casual movements of a hand-held video recorder (gait motions have a periodic structure), containing high-frequency low-amplitude jitter from the recorder's movement, evaluated using the proposed and existing VQA scores. We apply seven different stabilization approaches to the selected robot categories and quantify the tested algorithms' output quality and resource requirements. Finally, we report a decision matrix based on the robot's available resources for readily applying state-of-the-art stabilization methods in mobile robot surveillance. Our findings show that the proposed topology score is the most suitable for evaluating videos captured by mobile robots in unknown environments, owing to its non-reference assessment of periodic/quasiperiodic jitter, and that the decision matrix enables selecting a video stabilization algorithm based on the jitter characteristics of the mobile robot to improve the quality of the captured video.
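The abstract does not give the Topology Score formula, but the general recipe it describes (delay-embed the jitter signal with a sliding window, then measure loop persistence with persistent homology) can be sketched as follows, here using the ripser package as one possible backend; the actual TS definition in the paper may differ.

```python
import numpy as np
from ripser import ripser  # pip install ripser

def sliding_window_embedding(signal, dim, tau):
    """Takens-style delay embedding of a 1-D jitter signal into R^dim."""
    n = len(signal) - (dim - 1) * tau
    return np.stack([signal[i : i + (dim - 1) * tau + 1 : tau] for i in range(n)])

def topology_score(signal, dim=8, tau=2):
    """Score the periodicity of jitter via the most persistent H1 feature:
    a strongly periodic signal traces a loop in the embedding space."""
    cloud = sliding_window_embedding(np.asarray(signal, float), dim, tau)
    h1 = ripser(cloud, maxdim=1)["dgms"][1]     # H1 persistence diagram
    if len(h1) == 0:
        return 0.0
    return float(np.max(h1[:, 1] - h1[:, 0]))   # max persistence (death - birth)

# Example: a quasiperiodic jitter trace scores higher than white noise.
t = np.linspace(0, 10 * np.pi, 400)
print(topology_score(np.sin(t)), topology_score(np.random.randn(400)))
```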
Citations: 0
Curriculum-guided deep reinforcement learning with fuzzy rewards for autonomous push-grasp manipulation in cluttered environments
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.robot.2025.105325
Chih-Yung Huang, Adhan Efendi, Ying-Chun Wang
Robotic manipulation in cluttered environments requires robust coordination of pushing and grasping to overcome occlusions, constrained grasp geometries, and uncertain object interactions. This study presents a curriculum-guided deep reinforcement learning framework that jointly redesigns the training distribution, state abstraction, and reward structure for autonomous push–grasp manipulation. A depth-aware grasp potential module constructs a geometric affordance map that prioritizes feasible top-layer grasp opportunities, guiding the agent toward collision-free rearrangement behaviors. A fuzzy logic–based reward mechanism integrates changes in graspable area and grasp Q-values into a continuous shaping signal, addressing sparse feedback and stabilizing learning. A stage-wise curriculum with proportion-controlled difficulty progression gradually increases clutter density and object difficulty, enabling progressive acquisition of coordinated push–grasp skills. Extensive evaluations across randomized clutter, structured challenge scenarios, and real-world experiments on previously unseen and semi-transparent objects show that the proposed framework consistently outperforms VPG-based and grasp-quality baselines in grasp success and action efficiency. These results demonstrate the effectiveness of coupling curriculum design, depth-aware grasp prioritization, and fuzzy reward shaping for robust manipulation in complex, cluttered settings.
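As a rough illustration of fuzzy reward shaping, the sketch below combines the change in graspable area and the change in the best grasp Q-value through invented triangular memberships and a small rule base; the paper's actual memberships and rules are not specified in the abstract, so everything here is an assumption.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return float(np.clip(min((x - a) / (b - a + 1e-9),
                             (c - x) / (c - b + 1e-9)), 0.0, 1.0))

def fuzzy_push_reward(d_area, d_q):
    """Sugeno-style shaping reward from the change in graspable area (d_area)
    and the change in the best grasp Q-value (d_q), both roughly in [-1, 1]."""
    neg_a, zero_a, pos_a = tri(d_area, -2, -1, 0), tri(d_area, -1, 0, 1), tri(d_area, 0, 1, 2)
    neg_q, zero_q, pos_q = tri(d_q, -2, -1, 0), tri(d_q, -1, 0, 1), tri(d_q, 0, 1, 2)
    # Invented rule base: reward pushes that open up space AND raise grasp confidence.
    rules = [
        (min(pos_a, pos_q),  1.0),   # both improved -> strong positive
        (min(pos_a, zero_q), 0.5),
        (min(zero_a, pos_q), 0.5),
        (min(neg_a, neg_q), -1.0),   # both degraded -> strong negative
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den                 # continuous signal instead of sparse reward

print(fuzzy_push_reward(d_area=0.6, d_q=0.4))   # beneficial push
print(fuzzy_push_reward(d_area=-0.5, d_q=-0.7)) # harmful push
```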
Citations: 0
Reliable Nonsingularity Adaptive fixed-time sliding mode control under input saturation for an uncertain robotic manipulator
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-31 · DOI: 10.1016/j.robot.2025.105326
Van Tinh Nguyen, Thanh Tung Bui, Hai Yen Pham, Ngoc Thanh Pham, Dang-Khoa Nguyen, Saleh Mobayen
This paper proposes a novel reliable terminal sliding mode control (TSMC) scheme for an uncertain robotic manipulator that is susceptible to parameter uncertainties, external disturbances, and input saturation. The suggested approach guarantees fixed-time convergence of the tracking errors and non-singular behavior regardless of initial conditions by combining the classic pole placement technique with a well-designed sliding manifold. Both the unknown nonlinear dynamics and uncertainties are approximated using a radial basis function neural network (RBFNN), and the effects of input saturation are lessened by an appropriate compensation. Theoretical analysis based on Lyapunov stability lemmas and fixed-time stability shows that the system states converge to a small neighborhood of the origin within a bounded amount of time. Simulation results confirm the superior performance of the proposed approach compared to existing methods, showing better accuracy, reduced chattering, and energy savings. This control strategy offers a practical and effective solution for high-precision path tracking in robotic systems operating in challenging environments.
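A standard construction of the RBFNN approximator used in adaptive sliding mode schemes of this kind is sketched below, with a typical Lyapunov-derived weight-adaptation law driven by the sliding variable; the paper's exact update law, gains, and centers may differ.

```python
import numpy as np

class RBFNN:
    """Radial basis function network used as an online approximator of the
    lumped uncertainty f(x); output weights adapt from the sliding variable s."""
    def __init__(self, centers, width, gamma):
        self.c = np.asarray(centers, float)   # (n_nodes, n_states)
        self.b = width                        # common Gaussian width
        self.W = np.zeros(len(self.c))        # output weights, adapted online
        self.gamma = gamma                    # adaptation gain

    def phi(self, x):
        """Gaussian basis activations for state vector x."""
        d2 = np.sum((self.c - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.b ** 2))

    def estimate(self, x):
        return float(self.W @ self.phi(x))

    def adapt(self, x, s, dt):
        # Typical Lyapunov-derived law: W_dot = gamma * phi(x) * s
        self.W += self.gamma * self.phi(x) * s * dt

# Example: 25 centers on a grid over (tracking error, error rate) space.
grid = np.linspace(-1, 1, 5)
net = RBFNN([(a, b) for a in grid for b in grid], width=0.5, gamma=20.0)
net.adapt(np.array([0.2, -0.1]), s=0.05, dt=0.001)
print(net.estimate(np.array([0.2, -0.1])))
```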
Citations: 0
The rapid rise of soft robotics in surgical operations: Trends, challenges, and future directions
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-30 · DOI: 10.1016/j.robot.2025.105324
Babatunde Olamide Omiyale, Olamide Femi Akinsola, Muhammad Awais Ashraf, Niyi Gideon Olaiya, Akinola Ogbeyemi, Wenjun Chris Zhang
This paper investigates the transformative impact of soft robotics on surgical operations, particularly in the development of next-generation minimally invasive techniques. Conventional surgical procedures are often influenced by various factors, such as patient positioning, the precision of surgical instruments, the surgeon’s experience, and physical conditions. These factors can make it challenging to accurately execute predetermined surgical plans, which inevitably reduces surgical precision and safety. To address these challenges, soft robotic systems that mimic the flexibility and adaptability of biological tissues provide significant advantages over conventional rigid tools. These advantages include enhanced dexterity, reduced tissue trauma, and improved patient outcomes. Soft robots are made from compliant materials (e.g., silicone, hydrogels), which make them gentler on delicate tissues and organs. They can navigate tight or sensitive areas (e.g., the brain, heart, abdomen), allow for smaller incisions, minimize blood loss, reduce the risk of infection, and minimize recovery time, scarring, and human error caused by tremors or physical strain. This review examines recent advancements in soft robotics and their clinical applications, addresses technological challenges, and identifies future directions for integrating soft robotics into mainstream surgical practice.
Citations: 0
Exploiting Euclidean distance field properties for fast and safe 3D planning with a modified Lazy Theta*
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-29 · DOI: 10.1016/j.robot.2025.105317
Jose A. Cobano, L. Merino, F. Caballero
This paper presents the FS-Planner, a fast graph-search planner based on a modified Lazy Theta* algorithm that exploits the analytical properties of Euclidean Distance Fields (EDFs). We introduce a new cost function that integrates an EDF-based term proven to satisfy the triangle inequality, enabling efficient parent selection and reducing computation time while generating safe paths with smaller heading variations. We also derive an analytic approximation of the EDF integral along a segment and analyse the influence of the line-of-sight limit on the approximation error, motivating the use of a bounded visibility range. Furthermore, we propose a gradient-based neighbour-selection mechanism that decreases the number of explored nodes and improves computational performance without degrading safety or path quality. The FS-Planner produces safe paths with small heading changes without requiring the use of post-processing methods. Extensive experiments and comparisons in challenging 3D indoor simulation environments, complemented by tests in real-world outdoor environments, are used to evaluate and validate the FS-Planner. The results show consistent improvements in computation time, exploration efficiency, safety, and smoothness in a geometric sense compared with baseline heuristic planners, while maintaining sub-optimality within acceptable bounds. Finally, the proposed EDF-based cost formulation is orthogonal to the underlying search method and can be incorporated into other planning paradigms.
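For intuition, a numerical stand-in for the EDF-based edge cost is sketched below: the proximity term integrates 1/EDF along a segment by sampling. The paper instead derives an analytic approximation of this integral and proves that its cost term satisfies the triangle inequality; the weights, sample count, and clamping here are illustrative assumptions.

```python
import numpy as np

def edf_segment_cost(edf, p0, p1, n_samples=10):
    """Approximate the integral of an obstacle-proximity cost 1/EDF along the
    segment p0 -> p1 by sampling; edf(p) returns the Euclidean distance from
    point p to the nearest obstacle."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = p0[None, :] + ts[:, None] * (p1 - p0)[None, :]
    d = np.array([max(edf(p), 1e-3) for p in pts])   # clamp to avoid blow-up
    return np.linalg.norm(p1 - p0) * float(np.mean(1.0 / d))

def edge_cost(edf, p0, p1, w_dist=1.0, w_obs=0.5):
    """Lazy Theta*-style edge cost: path-length term plus EDF proximity term."""
    return (w_dist * np.linalg.norm(np.asarray(p1, float) - np.asarray(p0, float))
            + w_obs * edf_segment_cost(edf, p0, p1))

# Example EDF: a single spherical obstacle of radius 0.5 at the origin.
edf = lambda p: max(float(np.linalg.norm(p)) - 0.5, 0.0)
print(edge_cost(edf, [1, 1, 0], [2, 0, 1]))
```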
Citations: 0
MirrorNet: Hallucinating 2.5D depth images for efficient 3D scene reconstruction
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-27 · DOI: 10.1016/j.robot.2025.105321
Rafał Staszak, Bartlomiej Kulecki, Marek Kraft, Dominik Belter
Robots face challenges in perceiving new scenes, particularly when registering objects from a single perspective, which yields incomplete shape information about objects. Partial object models negatively influence the performance of grasping methods. To address this, robots can scan the scene from various perspectives or employ methods to directly fill in unknown regions. This research reexamines scene reconstruction, typically formulated in 3D space, proposing a novel formulation in 2D image space for robots with RGB-D cameras. We introduce a method that generates a depth image from a virtual camera pose on the opposite side of the reconstructed object. The article demonstrates that a convolutional neural network can be trained for accurate depth image generation and subsequent 3D scene reconstruction from a single viewpoint. We show that the proposed approach is computationally efficient and accurate when compared to methods that operate directly in 3D space. Furthermore, we illustrate the application of this model in enhancing grasping method success rates.
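A plausible reading of the virtual-camera construction is sketched below, assuming the virtual pose is obtained by reflecting the real camera position through the object centroid and looking back at it; the paper's exact pose definition may differ, and all names are hypothetical.

```python
import numpy as np

def look_at(position, target, up=np.array([0.0, 0.0, 1.0])):
    """Camera-to-world transform with the +z axis looking from `position`
    toward `target`; assumes the view direction is not parallel to `up`."""
    z = target - position
    z = z / np.linalg.norm(z)
    x = np.cross(up, z)
    if np.linalg.norm(x) < 1e-8:      # degenerate: looking straight along `up`
        x = np.array([1.0, 0.0, 0.0])
    x = x / np.linalg.norm(x)
    y = np.cross(z, x)
    T = np.eye(4)
    T[:3, :3] = np.column_stack((x, y, z))
    T[:3, 3] = position
    return T

def opposite_view_pose(cam_pos, object_center):
    """Virtual camera on the far side of the object: reflect the real camera
    position through the object's centroid and look back at it."""
    cam_pos = np.asarray(cam_pos, float)
    c = np.asarray(object_center, float)
    return look_at(2.0 * c - cam_pos, c)

T_virtual = opposite_view_pose(cam_pos=[0.4, 0.0, 0.5], object_center=[0.8, 0.0, 0.1])
```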
Citations: 0
Enhanced flexibility and dexterity in robotic endoscopy via a 6-DOF parallel mechanism and eye-gaze-assisted field-of-view control
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-27 · DOI: 10.1016/j.robot.2025.105322
Mengtang Li, Shen Zhao, Shuai Wang, Fanmao Liu
In conventional minimally invasive surgery, an assistant manually steers the endoscope based on the surgeon’s verbal commands, but fatigue and tremor can degrade field-of-view (FOV) stability and efficiency. Robotic endoscopes address this limitation through automated FOV adjustment via image-based visual servoing, ensuring smooth and stable visualization. However, most robotic implementations mount rigid straight-rod endoscopes on external serial arms, limiting dexterity and complicating remote-center-of-motion (RCM) control. Moreover, many automated FOV methods track surgical-tool tips without representing the surgeon’s intention. This work therefore presents a compact 6-DOF parallel endoscopic mechanism that improves flexibility and dexterity while simplifying RCM constraint satisfaction, together with an eye-gaze-assisted multi-tool tracking controller that dynamically weights tools according to surgeon attention. Simulations and experiments across diverse scenarios demonstrate FOV stabilization within 2 s, a mean image-space tracking error below 20 pixels, an eye–hand error below 3°, and at least a 30% reduction in unnecessary FOV adjustments. A supplementary video is available.
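As a sketch of gaze-weighted multi-tool tracking, the code below weights detected tool tips by their image-space distance to the gaze point and drives the FOV toward their weighted centroid, with a deadband to suppress unnecessary adjustments; the Gaussian kernel, its width, and the deadband are assumptions, not the paper's controller.

```python
import numpy as np

def gaze_weights(tool_px, gaze_px, sigma=80.0):
    """Attention weights: tool tips nearer the surgeon's gaze point (in pixels)
    receive exponentially more weight."""
    tool_px = np.asarray(tool_px, float)
    d2 = np.sum((tool_px - np.asarray(gaze_px, float)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w / (w.sum() + 1e-9)

def fov_error(tool_px, gaze_px, image_center, deadband_px=20.0):
    """Image-space error driving the visual servo; the deadband keeps the view
    steady when the weighted target is already close to the image centre."""
    target = gaze_weights(tool_px, gaze_px) @ np.asarray(tool_px, float)
    err = target - np.asarray(image_center, float)
    return err if np.linalg.norm(err) > deadband_px else np.zeros(2)

# Two tool tips; the surgeon is looking near the second one.
e = fov_error(tool_px=[[300, 260], [520, 300]], gaze_px=[500, 290],
              image_center=[320, 240])
```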
Citations: 0
Reference-guided image inpainting via progressive feature interaction and reconstruction for mobile robots with binocular cameras
IF 5.2 · CAS Tier 2 (Computer Science) · Q1 AUTOMATION & CONTROL SYSTEMS · Pub Date: 2025-12-26 · DOI: 10.1016/j.robot.2025.105320
Jingyi Liu, Hengyu Li, Hang Liu, Shaorong Xie, Jun Luo
Image inpainting is a critical technique for recovering missing information caused by camera soiling on mobile robots. However, most existing learning-based methods still struggle to handle damaged images with complex semantic environments and diverse hole patterns, primarily because of the insufficient acquisition and inadequate fusion of scene-consistent prior cues for damaged images. To address this limitation, we propose a novel reference-guided image inpainting network (RGI2N) for mobile robots equipped with binocular cameras, which employs adjacent camera images as inpainting guidance and fuses its prior information via progressive feature interaction to reconstruct damaged regions. Specifically, a back-projection-based feature interaction module (FIM) is proposed to align the features of the reference and damaged images, thereby capturing the contextual information of the reference image for inpainting. Additionally, a content reconstruction module (CRM) based on residual learning and channel attention is presented to selectively aggregate interactive features for reconstructing missing details. Building upon these two modules, we further devise a progressive feature interaction and reconstruction module (PFIRM) that organizes multiple FIM-CRM pairs into a stepwise structure, enabling the progressive fusion of multiscale contextual information derived from both the damaged and reference images. Moreover, a feature refinement module (FRM) is developed to interact with low-level fine-grained features and refine the reconstructed details. Extensive evaluations conducted on the public ETHZ dataset and our self-built MII dataset demonstrate that RGI2N outperforms other state-of-the-art approaches and produces high-quality inpainting results on real soiled data.
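A toy PyTorch stand-in for the channel-attention fusion idea behind the CRM is sketched below: interacted reference features are merged into the damaged-image features through a learned channel gate with a residual connection. This is an illustrative module built from the abstract's description, not the paper's architecture; the module name and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class AttentiveFusion(nn.Module):
    """Channel attention selects which interacted reference features to merge
    into the damaged-image features; a residual connection preserves content."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, damaged_feat, ref_feat):
        x = torch.cat([damaged_feat, ref_feat], dim=1)   # (B, 2C, H, W)
        x = x * self.gate(self.pool(x))                  # channel re-weighting
        return damaged_feat + self.proj(x)               # residual reconstruction

fused = AttentiveFusion(64)(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```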
Citations: 0