Latest Publications in IEEE Robotics and Automation Letters
Precise Mobile Manipulation of Small Everyday Objects
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-22 · DOI: 10.1109/LRA.2026.3656784
Arjun Gupta;Rishik Sathua;Saurabh Gupta
Many everyday mobile manipulation tasks require precise interaction with small objects, such as grasping a knob to open a cabinet or pressing a light switch. In this letter, we develop Visual Servoing with Vision Models (VSVM), a closed-loop framework that enables a mobile manipulator to tackle such precise tasks involving the manipulation of small objects. VSVM uses state-of-the-art vision foundation models to generate 3D targets for visual servoing to enable diverse tasks in novel environments. Naively doing so fails because of occlusion by the end-effector. VSVM mitigates this using vision models that out-paint the end-effector, thereby significantly enhancing target localization. We demonstrate that, aided by out-painting methods, open-vocabulary object detectors can serve as a drop-in module for VSVM to seek semantic targets (e.g., knobs), and point tracking methods can help VSVM reliably pursue interaction sites indicated by user clicks. We conduct a large-scale evaluation spanning experiments in 10 novel environments across 6 buildings, including 72 different object instances. VSVM obtains a 71% zero-shot success rate on manipulating unseen objects in novel environments in the real world, outperforming an open-loop control method by an absolute 42% and an imitation learning baseline trained on 1000+ demonstrations by an absolute 50%.
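A minimal sketch of the closed-loop idea: out-paint the end-effector before localizing the target, then servo toward it with a simple proportional law. The `outpaint_end_effector` and `detect_target_3d` stubs are hypothetical stand-ins for the vision foundation models; only the loop structure is meant to be illustrative.

```python
import numpy as np

def outpaint_end_effector(rgbd: np.ndarray) -> np.ndarray:
    """Stub: return the image with the robot's end-effector painted out."""
    return rgbd

def detect_target_3d(rgbd: np.ndarray) -> np.ndarray:
    """Stub: 3D interaction point (end-effector frame) from an open-vocab detector."""
    return np.array([0.05, -0.02, 0.30])

def servo_step(rgbd: np.ndarray, gain: float = 0.5):
    # Out-painting removes gripper occlusion so the small target stays localizable.
    target = detect_target_3d(outpaint_end_effector(rgbd))
    velocity = gain * target                      # proportional move toward the target
    done = float(np.linalg.norm(target)) < 1e-3   # converged when the residual is tiny
    return velocity, done

vel, done = servo_step(np.zeros((480, 640, 4)))
print(vel, done)
```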
Citations: 0
Hyperspectral Adapter for Semantic Segmentation With Vision Foundation Models
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-22 · DOI: 10.1109/LRA.2026.3656795
Juana Valeria Hurtado;Rohit Mohan;Abhinav Valada
Hyperspectral imaging (HSI) captures spatial information along with dense spectral measurements across numerous narrow wavelength bands. This rich spectral content has the potential to facilitate robust robotic perception, particularly in environments with complex material compositions, varying illumination, or other visually challenging conditions. However, current HSI semantic segmentation methods underperform due to their reliance on architectures and learning frameworks optimized for RGB inputs. In this work, we propose a novel hyperspectral adapter that leverages pretrained vision foundation models to effectively learn from hyperspectral data. Our architecture incorporates a spectral transformer and a spectrum-aware spatial prior module to extract rich spatial-spectral features. Additionally, we introduce a modality-aware interaction block that facilitates effective integration of hyperspectral representations and frozen vision Transformer features through dedicated extraction and injection mechanisms. Extensive evaluations on three benchmark autonomous driving datasets demonstrate that our architecture achieves state-of-the-art semantic segmentation performance while directly using HSI inputs, outperforming both vision-based and hyperspectral segmentation methods.
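As a rough illustration of the injection mechanism described above, the sketch below shows one plausible form of a modality-aware interaction block: frozen ViT tokens query hyperspectral tokens through gated cross-attention. This is an assumption-laden sketch, not the authors' architecture; the `InjectionBlock` name and the zero-initialized gate are illustrative choices.

```python
import torch
import torch.nn as nn

class InjectionBlock(nn.Module):
    """Hypothetical sketch: frozen ViT tokens attend to hyperspectral tokens."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: starts as identity

    def forward(self, vit_tokens, hsi_tokens):
        q = self.norm(vit_tokens)
        injected, _ = self.attn(q, hsi_tokens, hsi_tokens)
        return vit_tokens + self.gate * injected  # gated residual injection

vit = torch.randn(2, 196, 768)   # features from a frozen vision Transformer
hsi = torch.randn(2, 196, 768)   # spatial-spectral tokens for the same patches
print(InjectionBlock()(vit, hsi).shape)   # torch.Size([2, 196, 768])
```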
Citations: 0
VisionSafeEnhanced VPC: Cautious Predictive Control With Visibility Constraints Under Uncertainty for Autonomous Robotic Surgery
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-22 · DOI: 10.1109/LRA.2026.3656774
Jiayin Wang;Yanran Wei;Lei Jiang;Xiaoyu Guo;Ayong Zheng;Weidong Zhao;Zhongkui Li
Autonomous control of the laparoscope in robot-assisted Minimally Invasive Surgery (MIS) has received considerable research interest due to its potential to improve surgical safety. Despite progress in pixel-level Image-Based Visual Servoing (IBVS) control, the requirement of continuous visibility and the existence of complex disturbances, such as parameterization error, measurement noise, and payload uncertainties, can degrade the surgeon's visual experience and compromise procedural safety. To address these limitations, this letter proposes VisionSafeEnhanced Visual Predictive Control (VPC), a robust and uncertainty-adaptive framework that guarantees Field of View (FoV) safety under uncertainty. First, Gaussian Process Regression (GPR) is utilized to perform hybrid quantification of operational uncertainties, including residual model uncertainties, stochastic uncertainties, and external disturbances. Based on this uncertainty quantification, a novel safety-aware trajectory optimization framework with probabilistic guarantees is proposed, in which an uncertainty-adaptive safety Control Barrier Function (CBF) condition is derived from uncertainty propagation, and chance constraints are simultaneously formulated via probabilistic approximation. This uncertainty-aware formulation enables adaptive control-effort allocation, minimizing unnecessary camera motion while maintaining robustness. The proposed method is validated through comparative simulations and experiments on a commercial surgical robot platform (MicroPort MedBot Toumai) performing a sequential multi-target lymph node dissection. Compared with baseline methods, the framework maintains near-perfect target visibility (>99.9%), reduces tracking errors by over 77% under uncertainty, and lowers control effort by more than an order of magnitude.
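For reference, the robust CBF quadratic program this family of methods builds on has the generic form below; here σ(x) stands for the GPR-quantified uncertainty and κ for a tightening gain, both labels of convenience rather than the letter's exact notation.

```latex
\min_{u}\ \|u - u_{\mathrm{ref}}\|^{2}
\quad \text{s.t.} \quad
\frac{\partial h}{\partial x}\bigl(f(x) + g(x)\,u\bigr)
\;\geq\; -\alpha\bigl(h(x)\bigr) + \kappa\,\sigma(x)
```

where h(x) ≥ 0 encodes field-of-view safety and α is a class-K function; inflating the right-hand side by κσ(x) keeps the constraint satisfied with high probability under the quantified disturbances.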
Citations: 0
Safe Multimodal Replanning via Projection-Based Trajectory Clustering in Crowded Environments
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-21 · DOI: 10.1109/LRA.2026.3656780
Yongjae Lim;Seungwoo Jung;Dabin Kim;Dongjae Lee;H. Jin Kim
Fast replanning of the local trajectory is essential for autonomous robots to ensure safe navigation in crowded environments, as such environments require the robot to frequently update its trajectory due to unexpected and dynamic obstacles. In such settings, relying on a single trajectory optimization may not provide sufficient alternatives, making it harder to quickly switch to a safer trajectory and increasing the risk of collisions. While parallel trajectory optimization can address this limitation by considering multiple candidates, it depends heavily on well-defined initial guidance, which is difficult to obtain in complex environments. In this work, we propose a method for identifying the multimodality of the optimal trajectory distribution for safe navigation in crowded 3D environments without initial guidance. Our approach ensures safe trajectory generation by projecting sampled trajectories onto safe constraint sets and clustering them based on their potential to converge to the same locally optimal trajectory. This process naturally produces diverse trajectory options without requiring predefined initial guidance. Finally, for each trajectory cluster, we utilize the Model Predictive Path Integral framework to determine the optimal control input sequence, which corresponds to a local maximum of the multi-modal optimal trajectory distribution. We first validate our approach in simulations, achieving higher success rates than existing methods. Subsequent hardware experiments demonstrate that our fast local trajectory replanning strategy enables a drone to safely navigate crowded environments.
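A toy sketch of the project-then-cluster idea follows, under strong simplifying assumptions (the safe set is an axis-aligned box, and trajectories whose projected endpoints fall in the same cell are treated as converging to the same local optimum); the letter's actual projection operators and clustering criterion are more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_samples = 20, 64
samples = np.cumsum(rng.normal(0, 0.3, (n_samples, T, 2)), axis=1)  # random-walk trajectories

lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
projected = np.clip(samples, lo, hi)          # projection onto the box-shaped safe set

cells = np.round(projected[:, -1] / 0.5)      # bucket trajectories by projected endpoint
labels = {tuple(c): i for i, c in enumerate(np.unique(cells, axis=0))}
clusters = [labels[tuple(c)] for c in cells]
print(f"{len(labels)} trajectory modes from {n_samples} samples")
```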
Citations: 0
A&B-LO: Continuous-Time LiDAR Odometry With Adaptive Non-Uniform B-Spline Trajectory Representation
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-21 · DOI: 10.1109/LRA.2026.3656754
Yuchu Lu;Chenpeng Yao;Jiayuan Du;Chengju Liu;Qijun Chen
LiDAR odometry, fused with inertial measurement unit (IMU) data, is an essential task in robotic navigation. Unlike mainstream methods, which compensate for the motion distortion of LiDAR data using high-frequency inertial sensors, this letter handles the distortion with a continuous-time trajectory representation and achieves performance competitive with the state of the art. We propose a compact LiDAR odometry framework with an adaptive non-uniform B-spline trajectory representation, formulating odometry as a continuous-time estimation problem. We deploy point-to-plane registration and pseudo-velocity smoothing constraints to fully utilize the geometric and kinematic information of odometry. For faster convergence of the optimization, the analytical Jacobians of the constraints are derived to solve the nonlinear least-squares minimization. For a more efficient B-spline representation, an adaptive knot-spacing technique is proposed to adjust the time interval between the spline's control poses. Extensive experiments on public and realistic datasets demonstrate the validity and efficiency of our system compared with other LiDAR and LiDAR-inertial methods.
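The underlying representation is standard enough to demonstrate directly: below is a sketch of a cubic B-spline trajectory over a non-uniform knot vector using SciPy, with denser knots where the motion would need finer temporal resolution. The specific knot values and control poses are made up for illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

degree = 3
# Clamped cubic spline on [0, 1]; the interior knots are spaced non-uniformly,
# concentrating resolution around t in [0.2, 0.4].
knots = np.array([0, 0, 0, 0, 0.2, 0.3, 0.35, 0.4, 1.0, 1.0, 1.0, 1.0])
n_ctrl = len(knots) - degree - 1                       # 8 control points
ctrl = np.random.default_rng(1).normal(size=(n_ctrl, 3))  # xyz control poses
traj = BSpline(knots, ctrl, degree)

t = np.linspace(0.0, 1.0, 5, endpoint=False)
print(traj(t))                 # continuous-time positions p(t)
print(traj.derivative()(t))    # velocities, useful for kinematic constraints
```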
Citations: 0
Design and Validation of Docking-Based Cooperative Strategies for Snake Robots in Complex Environments
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-21 · DOI: 10.1109/LRA.2026.3656725
Xuan Xiao;Kefeng Zhang;Jiaqi Zhu;Jianming Wang;Runtian Zhu
Snake robots exhibit remarkable locomotion capabilities in complex environments as their degrees of freedom (DOFs) increase, but at the cost of higher energy consumption. To address this issue, this article proposes a cooperation strategy for snake robots based on a head-tail docking mechanism, which allows multiple short snake robots to combine into a longer one, enabling the execution of complex tasks. The mechanical design and implementation of the dockable snake robots are introduced, featuring passive docking mechanisms at both the head and tail, an embedded controller and a vision camera mounted on the head, and a distributed power supply system. Furthermore, control strategies for the combined robots have been developed to perform the crawler gait and the motion of spanning between parallel pipes. Experiments are conducted to demonstrate the feasibility and performance of the proposed docking mechanism and cooperative control methods. Specifically, two snake robots can autonomously dock under visual guidance. After docking, the combined robot can rapidly traverse flat surfaces by performing the crawler gait at an average speed of 0.168 m/s. Additionally, the robots can perform spanning between parallel pipes and pipe inspection tasks concurrently by separating.
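For the gait side, a minimal serpenoid-style generator of the kind commonly used for snake-robot undulation is sketched below; the letter's actual crawler-gait parameters for the combined robot are not reproduced, so `amp`, `omega`, and `phase_lag` are placeholder values.

```python
import numpy as np

def crawler_gait(t: float, n_joints: int = 8, amp: float = 0.5,
                 omega: float = 2.0, phase_lag: float = 0.6,
                 offset: float = 0.0) -> np.ndarray:
    """Joint angles (rad) for each module at time t: a phase-shifted sine wave
    traveling down the body, the standard serpenoid form."""
    i = np.arange(n_joints)
    return amp * np.sin(omega * t + phase_lag * i) + offset

for t in np.linspace(0.0, 1.0, 3):
    print(np.round(crawler_gait(t), 3))
```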
Citations: 0
VINS-Mah: A Robust Monocular Visual-Inertial State Estimator for Dynamic Environments
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-21 · DOI: 10.1109/LRA.2026.3656724
Yuquan Hu;Alessandro Gardi
Conventional Visual-Inertial Navigation Systems (VINS) are developed under the assumption of static environments, leading to significant performance degradation in dynamic scenarios. In recent years, many dynamic-feature-aware VINS implementations have been proposed, but most of them rely on prior semantic information and lack generalizability. To address these limitations, we propose a robust monocular method called VINS-Mah, which is capable of identifying both dynamic and unreliable features without prior semantic information. First, the covariances related to the feature reprojection errors are computed via the proposed uncertainty estimator. Subsequently, a dynamic feature filter module combines the feature reprojection errors and the computed covariances to determine the Mahalanobis distance, and then applies a Chi-square test to filter out dynamic features. The proposed method is verified against several publicly available datasets, covering both simulated and real-world scenes. Experimental results demonstrate that VINS-Mah outperforms other state-of-the-art methods in dynamic scenarios, while not degrading in static environments.
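The gating step the abstract describes has a textbook form that can be shown directly: a 2-DoF reprojection residual r with covariance S is flagged as dynamic or unreliable when its squared Mahalanobis distance exceeds a chi-square threshold. The covariance values below are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import chi2

def is_dynamic(r: np.ndarray, S: np.ndarray, confidence: float = 0.95) -> bool:
    """Chi-square gating on the squared Mahalanobis distance of residual r."""
    d2 = r @ np.linalg.solve(S, r)              # Mahalanobis distance squared
    return bool(d2 > chi2.ppf(confidence, df=r.size))

S = np.diag([1.5, 1.5])                          # covariance from the uncertainty estimator
print(is_dynamic(np.array([0.5, 0.4]), S))       # False: consistent static feature
print(is_dynamic(np.array([4.0, 3.0]), S))       # True: likely a dynamic feature
```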
Citations: 0
OccTENS: 3D Occupancy World Model via Temporal Next-Scale Prediction
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-19 · DOI: 10.1109/LRA.2026.3655202
Bu Jin;Songen Gu;Xiaotao Hu;Yupeng Zheng;Xiaoyang Guo;Qian Zhang;Xiaoxiao Long;Wei Yin
In this paper, we propose OccTENS, a generative occupancy world model that enables controllable, high-fidelity long-term occupancy generation while maintaining computational efficiency. Different from visual generation, the occupancy world model must capture the fine-grained 3D geometry and dynamic evolution of the 3D scenes, posing great challenges for the generative models. Recent approaches based on autoregression (AR) have demonstrated the potential to predict vehicle movement and future occupancy scenes simultaneously from historical observations, but they typically suffer from inefficiency, temporal degradation in long-term generation and lack of controllability. To holistically address these issues, we reformulate the occupancy world model as a temporal next-scale prediction (TENS) task, which decomposes the temporal sequence modeling problem into the modeling of spatial scale-by-scale generation and temporal scene-by-scene prediction. With a TensFormer, OccTENS can effectively manage the temporal causality and spatial relationships of occupancy sequences in a flexible and scalable way. To enhance the pose controllability, we further propose a holistic pose aggregation strategy, which features a unified sequence modeling for occupancy and ego-motion. Experiments show that OccTENS outperforms the state-of-the-art method with both higher occupancy quality and faster inference time.
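As a toy view of the "next-scale" decomposition (not the actual TensFormer tokenization), an occupancy grid can be represented coarse-to-fine, with each scale predicted conditioned on the coarser ones:

```python
import numpy as np

def scale_pyramid(occ: np.ndarray, n_scales: int = 3) -> list:
    """Coarse-to-fine pyramid of a 2D occupancy grid via 2x2 max-pooling."""
    scales, cur = [occ], occ
    for _ in range(n_scales - 1):
        h, w = cur.shape
        cur = cur.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
        scales.append(cur)
    return scales[::-1]          # coarsest first: the generation order

occ = (np.random.default_rng(2).random((16, 16)) > 0.7).astype(np.int8)
for s in scale_pyramid(occ):
    print(s.shape)               # (4, 4) -> (8, 8) -> (16, 16)
```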
Citations: 0
Design and NMPC-Based Control of a Hybrid Sprawl-Tuned Vehicle With Flying and Driving Modes
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-19 · DOI: 10.1109/LRA.2026.3655281
Haoyu Wang;Zhiqiang Miao;Weiwei Zhan;Xiangke Wang;Wei He;Yaonan Wang
To address the limited maneuverability and low energy efficiency of autonomous aerial vehicles (AAVs) in confined spaces, we design and implement the Hybrid Sprawl-Tuned Vehicle (HSTV), a deformable multi-modal robotic platform specifically engineered for operation in complex and spatially constrained environments. Based on the "FSTAR" platform, HSTV is equipped with passive front wheels and actively driven rear wheels. The gear transmission mechanism allows the rear wheels to be driven without dedicated motors, simplifying the system architecture. For both flying and driving modes, detailed kinematics and dynamics models, integrated with a mode-switching strategy, are constructed using the Newton-Euler method. Based on the developed models, a constrained nonlinear model predictive controller is designed to achieve accurate motion performance in both flying and driving modes. Comprehensive experimental results and comparative analysis demonstrate that HSTV achieves high trajectory tracking accuracy across both flying and driving modes while saving up to 70.9% energy without significantly increasing structural complexity (maintained at 98.6% simplicity).
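To make the receding-horizon structure concrete, here is a generic NMPC sketch on a simple unicycle model; the HSTV's actual flying/driving dynamics, constraints, and cost from the letter are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

DT, H = 0.1, 8                                   # step size, horizon length

def rollout(x0, u):
    """Integrate unicycle dynamics over the horizon; u is flat [v0, w0, v1, w1, ...]."""
    x, traj = np.array(x0, dtype=float), []
    for v, w in u.reshape(H, 2):
        x = x + DT * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        traj.append(x.copy())
    return np.array(traj)

def cost(u, x0, ref):
    traj = rollout(x0, u)
    # Track a fixed reference point, with a small input-effort penalty.
    return np.sum((traj[:, :2] - ref) ** 2) + 1e-3 * np.sum(u ** 2)

x0, ref = [0.0, 0.0, 0.0], np.array([1.0, 0.5])
res = minimize(cost, np.zeros(2 * H), args=(x0, ref),
               bounds=[(-1, 1)] * (2 * H), method="SLSQP")
v_cmd, w_cmd = res.x[:2]                          # receding horizon: apply only the first input
print(round(float(v_cmd), 3), round(float(w_cmd), 3))
```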
Citations: 0
Safety-Critical Steering Control for Rubber-Tired Container Gantry Cranes: A State-Interlocked CBF Approach
IF 5.3 · CAS Zone 2, Computer Science · Q2 ROBOTICS · Pub Date: 2026-01-19 · DOI: 10.1109/LRA.2026.3655311
Cong Li;Qin Rao;Zheng Tian;Jun Yang
The rubber-tired container gantry crane (RTG) is a type of heavy-duty lifting equipment commonly used in container yards; it is driven by rubber tires on both sides and steered via differential drive. While moving along the desired path, the RTG must remain centered in the lane with a restricted heading angle, as deviations may compromise the safety of subsequent yard operations. Due to its underactuated nature and the presence of external disturbances, achieving accurate lane-keeping poses a significant control challenge. To address this issue, a robust safety-critical steering control strategy integrating a disturbance-rejection vector field (VF) with a new state-interlocked control barrier function (SICBF) is proposed. The strategy employs a VF path-following method as the nominal controller. By strategically shrinking the safe set, the SICBF overcomes the limitations of traditional CBFs, such as state coupling in the inequality verification and infeasibility when the control coefficient tends to zero. Furthermore, by incorporating a disturbance observer (DOB) into the quadratic programming (QP) framework, the robustness and safety of the control system are significantly enhanced. Comprehensive simulations and experiments are conducted on a practical RTG with a 40-ton load capacity. To the best of our knowledge, the proposed method is one of the very few that have been successfully applied to practical RTG systems.
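For intuition, a standard CBF safety filter with a single affine constraint a + b·u ≥ 0 admits the closed-form correction below; this is only the generic building block, and the letter's state-interlocked condition and DOB compensation go beyond it.

```python
import numpy as np

def cbf_qp_filter(u_nom: np.ndarray, a: float, b) -> np.ndarray:
    """Closed-form solution of min ||u - u_nom||^2 s.t. a + b.u >= 0."""
    b = np.asarray(b, dtype=float)
    slack = a + b @ u_nom
    if slack >= 0.0:                        # nominal command is already safe
        return u_nom
    return u_nom - (slack / (b @ b)) * b    # minimal correction onto the boundary

u_nom = np.array([0.8, 0.1])                # nominal differential-drive command
u_safe = cbf_qp_filter(u_nom, a=-1.0, b=[1.0, 0.0])
print(u_safe)                               # [1.0, 0.1]: pushed back onto a + b.u = 0
```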
Citations: 0