
Latest Publications in IEEE Robotics and Automation Letters

Azimuth-LIO: Robust LiDAR-Inertial Odometry via Azimuth-Aware Voxelization and Probabilistic Fusion
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-19 | DOI: 10.1109/LRA.2026.3655291
Zhongguan Liu;Wei Li;Honglei Che;Lu Pan;Shuaidong Yuan
Voxel-based LiDAR–inertial odometry (LIO) is accurate and efficient but can suffer from geometric inconsistencies when single-Gaussian voxel models indiscriminately merge observations from conflicting viewpoints. To address this limitation, we propose Azimuth-LIO, a robust voxel-based LIO framework that leverages azimuth-aware voxelization and probabilistic fusion. Instead of using a single distribution per voxel, we discretize each voxel into azimuth-sectorized substructures, each modeled by an anisotropic 3D Gaussian to preserve viewpoint-specific spatial features and uncertainties. We further introduce a direction-weighted distribution-to-distribution registration metric to adaptively quantify the contributions of different azimuth sectors, followed by a Bayesian fusion framework that exploits these confidence weights to ensure azimuth-consistent map updates. The performance and efficiency of the proposed method are evaluated on public benchmarks including the M2DGR, MCD, and SubT-MRS datasets, demonstrating superior accuracy and robustness compared to existing voxel-based algorithms.
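The core idea is to replace each voxel's single Gaussian with one Gaussian per azimuth sector. Below is a minimal sketch of that bookkeeping, assuming a fixed sector count and Welford-style incremental statistics; names and the discretization are our own illustration, not the authors' code.

```python
# Hypothetical sketch of azimuth-sectorized voxel statistics.
import numpy as np

N_SECTORS = 8  # assumed sector count; the paper's discretization may differ

class SectorGaussian:
    """Running anisotropic 3D Gaussian for one azimuth sector of a voxel."""
    def __init__(self):
        self.n = 0
        self.mean = np.zeros(3)
        self.M2 = np.zeros((3, 3))  # sum of outer products of deviations

    def update(self, p):
        # Welford-style incremental mean/covariance update
        self.n += 1
        d = p - self.mean
        self.mean += d / self.n
        self.M2 += np.outer(d, p - self.mean)

    def cov(self):
        return self.M2 / max(self.n - 1, 1)

def sector_index(point, sensor_origin):
    """Assign a point to an azimuth sector based on its viewing direction."""
    v = point - sensor_origin
    azimuth = np.arctan2(v[1], v[0])          # [-pi, pi)
    frac = (azimuth + np.pi) / (2 * np.pi)    # [0, 1)
    return int(frac * N_SECTORS) % N_SECTORS

# Each voxel holds one Gaussian per sector instead of a single distribution,
# so observations from conflicting viewpoints are kept apart.
voxel = [SectorGaussian() for _ in range(N_SECTORS)]
origin = np.zeros(3)
for p in np.random.randn(100, 3):
    voxel[sector_index(p, origin)].update(p)
```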
Citations: 0
DiffPF: Differentiable Particle Filtering With Generative Sampling via Conditional Diffusion Models
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-19 | DOI: 10.1109/LRA.2026.3655302
Ziyu Wan;Lin Zhao
This paper proposes DiffPF, a differentiable particle filter that leverages diffusion models for state estimation in dynamic systems. Unlike conventional differentiable particle filters, which require importance weighting and typically rely on predefined or low-capacity proposal distributions, DiffPF learns a flexible posterior sampler by conditioning a diffusion model on predicted particles and the current observation. This enables accurate, equally-weighted sampling from complex, high-dimensional, and multimodal filtering distributions. We evaluate DiffPF across a range of scenarios, including both unimodal and highly multimodal distributions, and test it on simulated as well as real-world tasks, where it consistently outperforms existing filtering baselines. In particular, DiffPF achieves a 90.3% improvement in estimation accuracy on a highly multimodal global localization benchmark, and a nearly 50% improvement on the real-world robotic manipulation benchmark, compared to state-of-the-art differentiable filters. To the best of our knowledge, DiffPF is the first method to integrate conditional diffusion models into particle filtering, enabling high-quality posterior sampling that produces more informative particles and significantly improves state estimation. The code is available at https://github.com/ZiyuNUS/DiffPF.
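To make the generative-sampling idea concrete, here is a minimal DDPM-style reverse-sampling loop for a conditional particle proposal. The noise predictor, step count, and schedule are stand-in assumptions; DiffPF's trained network and conditioning are more elaborate.

```python
# Illustrative reverse-diffusion sampler for equally weighted particles.
import numpy as np

T = 50                                   # assumed number of diffusion steps
betas = np.linspace(1e-4, 0.05, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def eps_model(x_t, t, cond):
    """Placeholder for the trained conditional noise predictor.
    `cond` would encode the predicted particles and the current observation."""
    return np.zeros_like(x_t)            # stand-in: the real model is learned

def sample_particles(n_particles, state_dim, cond):
    x = np.random.randn(n_particles, state_dim)   # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(x, t, cond)
        # DDPM posterior-mean update for x_{t-1} given x_t
        x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * np.random.randn(*x.shape)
    return x  # equally weighted samples from the learned posterior
```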
Citations: 0
Gentle Manipulation of Long-Horizon Tasks Without Human Demonstrations
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-15 | DOI: 10.1109/LRA.2026.3653406
Jiayu Zhou;Qiwei Wu;Haitao Jiang;Xuanbao Qin;Yunjiang Lou;Xiaogang Xiong;Renjing Xu
In the field of robotic manipulation, traditional methods lack the flexibility required to meet the demands of diverse applications. Consequently, researchers have increasingly focused on developing more general techniques, particularly for long-horizon and gentle manipulation, to enhance the manipulation ability and adaptability of robots. In this study, we propose a framework called VLM-Driven Atomic Skills with Diffusion Policy Distillation (VASK-DP), which integrates tactile sensing to enable gentle control of robotic arms in long-horizon tasks. The framework trains atomic manipulation skills through reinforcement learning in simulated environments. The Visual Language Model (VLM) interprets RGB observations and natural language instructions to select and sequence atomic skills, guiding task decomposition, skill switching, and execution. It also generates expert demonstration datasets that serve as the basis for imitation learning. Subsequently, compliant long-horizon manipulation policies are distilled from these demonstrations using diffusion-based imitation learning. We evaluate multiple control modes, distillation strategies, and decision frameworks. Quantitative results across diverse simulation environments and long-horizon tasks validate the effectiveness of our approach. Furthermore, real robot deployment demonstrates successful task execution on physical hardware.
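The VLM's role of selecting and sequencing atomic skills can be pictured as a simple dispatch loop. The sketch below assumes hypothetical skill names and a `query_vlm` helper; neither is from the paper.

```python
# Hypothetical skill-dispatch loop: a VLM proposes an ordered sequence of
# atomic skill names from an image and an instruction, and the executor
# runs the corresponding policies in order.

def grasp(robot): ...
def lift(robot): ...
def place(robot): ...

ATOMIC_SKILLS = {"grasp": grasp, "lift": lift, "place": place}

def query_vlm(image, instruction):
    """Stand-in for the VLM call; returns an ordered list of skill names."""
    return ["grasp", "lift", "place"]

def execute_long_horizon(robot, image, instruction):
    for name in query_vlm(image, instruction):
        skill = ATOMIC_SKILLS.get(name)
        if skill is None:
            raise ValueError(f"VLM proposed unknown skill: {name}")
        skill(robot)  # each skill is an RL-trained, tactile-aware policy
```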
Citations: 0
AING-SLAM: Accurate Implicit Neural Geometry-Aware SLAM With Appearance and Semantics via History-Guided Optimization
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-14 | DOI: 10.1109/LRA.2026.3653380
Yanan Hao;Chenhui Shi;Pengju Zhang;Fulin Tang;Yihong Wu
In range-based SLAM systems, localization accuracy depends on the quality of geometric maps. Sparse LiDAR scans and noisy depth from RGB-D sensors often yield incomplete or inaccurate reconstructions that degrade pose estimation. Appearance and semantic cues, readily available from onboard RGB and pretrained models, can serve as complementary signals to strengthen geometry. Nevertheless, variations in appearance due to illumination or texture and inconsistencies in semantic labels across frames can hinder geometric optimization if directly used as supervision. To address these challenges, we propose AING-SLAM, an Accurate Implicit Neural Geometry-aware SLAM framework that allows appearance and semantics to effectively strengthen geometry in both mapping and odometry. A unified neural point representation with a lightweight cross-modal decoder integrates geometry, appearance and semantics, enabling auxiliary cues to refine geometry even in sparse or ambiguous regions. For pose tracking, appearance-semantic-aided odometry jointly minimizes SDF, appearance, and semantic residuals with adaptive weighting, improving scan-to-map alignment and reducing drift. To safeguard stability, a history-guided gradient fusion strategy aligns instantaneous updates with long-term optimization trends, mitigating occasional inconsistencies between appearance/semantic cues and SDF-based supervision, thereby strengthening geometric optimization. Extensive experiments on indoor RGB-D and outdoor LiDAR benchmarks demonstrate real-time performance, state-of-the-art localization accuracy, and high-fidelity reconstruction across diverse environments.
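One way to read the history-guided gradient fusion is as blending the instantaneous gradient with an exponential moving average of past gradients, down-weighting updates that oppose the long-term trend. The sketch below is our interpretation under that assumption, not the authors' implementation.

```python
# Sketch of a history-guided gradient fusion step: noisy appearance/semantic
# gradients cannot overturn the long-term optimization trend.
import numpy as np

class HistoryGuidedFusion:
    def __init__(self, dim, decay=0.9, blend=0.5):
        self.ema = np.zeros(dim)   # long-term gradient trend
        self.decay = decay
        self.blend = blend         # assumed fixed; could be made adaptive

    def fuse(self, grad):
        self.ema = self.decay * self.ema + (1 - self.decay) * grad
        # down-weight the instantaneous gradient when it opposes the trend
        agree = float(np.dot(grad, self.ema))
        w = self.blend if agree < 0 else 1.0
        return w * grad + (1 - w) * self.ema
```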
Citations: 0
Lagrangian Neural Network-Based Control: Improving Robotic Trajectory Tracking via Linearized Feedback
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-14 | DOI: 10.1109/LRA.2026.3653326
Manuel Weiss;Alexander Pawluchin;Jan-Hendrik Ewering;Thomas Seel;Ivo Boblan
This letter introduces a control framework that leverages a Lagrangian neural network (LNN) for computed torque control (CTC) of robotic systems with unknown dynamics. Unlike prior LNN-based controllers that are placed outside the feedback-linearization framework (e.g., feedforward), we embed an LNN inverse-dynamics model within a CTC loop, thereby shaping the closed-loop error dynamics. This strategy, referred to as LNN-CTC, ensures a physically consistent model and improves extrapolation, requiring neither prior model knowledge nor extensive training data. The approach is experimentally validated on a robotic arm with four degrees of freedom and compared with conventional model-based CTC, physics-informed neural network (PINN)-CTC, deep neural network (DNN)-CTC, an LNN-based feedforward controller, and a PID controller. Results demonstrate that LNN-CTC significantly outperforms model-based baselines by up to 30% in tracking accuracy, achieving high performance with minimal training data. In addition, LNN-CTC outperforms all other evaluated baselines in both tracking accuracy and data efficiency, attaining lower joint-space RMSE for the same training data. The findings highlight the potential of physics-informed neural architectures to generalize robustly across various operating conditions and contribute to narrowing the performance gap between learned and classical control strategies.
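The classic computed-torque law is tau = M(q)(q̈_des + Kd·ė + Kp·e) + h(q, q̇); embedding the learned model means M and h come from the LNN. A minimal sketch, assuming stand-in functions for the LNN-derived terms and illustrative gains:

```python
# Computed-torque control with a learned inverse-dynamics model in the loop.
# `lnn_mass_matrix` and `lnn_bias` stand in for quantities derived from the
# trained Lagrangian neural network; the gains are illustrative.
import numpy as np

Kp = np.diag([100.0] * 4)   # proportional gains (4-DoF arm, as in the letter)
Kd = np.diag([20.0] * 4)    # derivative gains

def lnn_mass_matrix(q):
    """Stand-in for the LNN-derived inertia matrix M(q)."""
    return np.eye(len(q))

def lnn_bias(q, qd):
    """Stand-in for the LNN-derived Coriolis/gravity terms h(q, qd)."""
    return np.zeros_like(q)

def ctc_torque(q, qd, q_des, qd_des, qdd_des):
    e, ed = q_des - q, qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e      # shapes linear closed-loop error dynamics
    return lnn_mass_matrix(q) @ v + lnn_bias(q, qd)
```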
Citations: 0
On Your Own: Pro-Level Autonomous Drone Racing in Uninstrumented Arenas
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-14 | DOI: 10.1109/LRA.2026.3653405
Michael Bosello;Flavio Pinzarrone;Sara Kiade;Davide Aguiari;Yvo Keuter;Aaesha AlShehhi;Gyordan Caminati;Kei Long Wong;Ka Seng Chou;Junaid Halepota;Fares Alneyadi;Jacopo Panerati;Giovanni Pau
Drone technology is proliferating in many industries, including agriculture, logistics, defense, infrastructure, and environmental monitoring. Vision-based autonomy is one of its key enablers, particularly for real-world applications. This is essential for operating in novel, unstructured environments where traditional navigation methods may be unavailable. Autonomous drone racing has become the de facto benchmark for such systems. State-of-the-art research has shown that autonomous systems can surpass human-level performance in racing arenas. However, the direct applicability to commercial and field operations is still limited, as current systems are often trained and evaluated in highly controlled environments. In our contribution, the system's capabilities are analyzed within a controlled environment—where external tracking is available for ground-truth comparison—but also demonstrated in a challenging, uninstrumented environment—where ground-truth measurements were never available. We show that our approach can match the performance of professional human pilots in both scenarios.
Citations: 0
Super-LIO: A Robust and Efficient LiDAR-Inertial Odometry System With a Compact Mapping Strategy
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-14 | DOI: 10.1109/LRA.2026.3653372
Liansheng Wang;Xinke Zhang;Chenhui Li;Dongjiao He;Yihan Pan;Jianjun Yi
LiDAR-Inertial Odometry (LIO) is a foundational technique for autonomous systems, yet its deployment on resource-constrained platforms remains challenging due to computational and memory limitations. We propose Super-LIO, a robust LIO system that demands both high performance and accuracy, ideal for applications such as aerial robots and mobile autonomous systems. At the core of Super-LIO is a compact octo-voxel-based map structure, termed OctVox, that limits each voxel to eight subvoxel representatives, enabling strict point density control and incremental denoising during map updates. This design enables a simple yet efficient and accurate map structure, which can be easily integrated into existing LIO frameworks. Additionally, Super-LIO designs a heuristic-guided KNN strategy (HKNN) that accelerates the correspondence search by leveraging spatial locality, further reducing runtime overhead. We evaluated the proposed system using four publicly available datasets and several self-collected datasets, totaling more than 30 sequences. Extensive testing on both X86 and ARM platforms confirms that Super-LIO offers superior efficiency and robustness, while maintaining competitive accuracy. Super-LIO processes each frame approximately 73% faster than SOTA, while consuming less CPU resources. The system is fully open-source and compatible with a wide range of LiDAR sensors and computing platforms. The implementation is available at: https://github.com/Liansheng-Wang/Super-LIO.git.
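The OctVox idea — at most eight representatives per voxel, one per 2x2x2 octant — can be sketched as a hash grid with per-octant averaging. Details such as the averaging rule and voxel size are assumptions of this illustration, not the released implementation.

```python
# Sketch of an OctVox-style voxel: one representative point per octant,
# with new points averaged into existing representatives (incremental denoising).
import numpy as np

VOXEL_SIZE = 0.5  # assumed map resolution

class OctVoxel:
    def __init__(self):
        self.reps = {}    # octant index (0..7) -> (mean point, count)

    def insert(self, p, voxel_min):
        half = VOXEL_SIZE / 2.0
        off = ((p - voxel_min) >= half).astype(int)       # 0/1 per axis
        octant = int(off[0] | (off[1] << 1) | (off[2] << 2))
        if octant in self.reps:
            mean, n = self.reps[octant]
            self.reps[octant] = ((mean * n + p) / (n + 1), n + 1)  # denoise
        else:
            self.reps[octant] = (p, 1)   # at most 8 representatives per voxel

grid = {}
def insert_point(p):
    key = tuple(np.floor(p / VOXEL_SIZE).astype(int))
    grid.setdefault(key, OctVoxel()).insert(p, np.array(key) * VOXEL_SIZE)
```

Capping each voxel at eight averaged representatives bounds both memory and the cost of nearest-neighbor queries, which is what enables the reported speedups on ARM-class hardware.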
Citations: 0
Event Spectroscopy: Event-Based Multispectral and Depth Sensing Using Structured Light
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-13 | DOI: 10.1109/LRA.2026.3653368
Christian Geckeler;Niklas Neugebauer;Manasi Muglikar;Davide Scaramuzza;Stefano Mintchev
Uncrewed aerial vehicles (UAVs) are increasingly deployed in forest environments for tasks such as environmental monitoring and search and rescue, which require safe navigation through dense foliage and precise data collection. Traditional sensing approaches, including passive multispectral and RGB imaging, suffer from latency, poor depth resolution, and strong dependence on ambient light—especially under forest canopies. In this work, we present a novel event spectroscopy system that simultaneously enables high-resolution, low-latency depth reconstruction with integrated multispectral imaging using a single sensor. Depth is reconstructed using structured light, and by modulating the wavelength of the projected structured light, our system captures spectral information in controlled bands between 650 nm and 850 nm. We demonstrate up to 60% improvement in RMSE over commercial depth sensors and validate the spectral accuracy against a reference spectrometer and commercial multispectral cameras, demonstrating comparable performance. A portable version limited to RGB is used to collect real-world depth and spectral data from a Masoala Rainforest. We demonstrate color image reconstruction and material differentiation between leaves and branches using this spectral and depth data. Our results show that adding depth (available at no extra effort with our setup) to material differentiation improves the accuracy by over 30% compared to color-only method. Our system, tested in both lab and real-world rainforest environments, shows strong performance in depth estimation, RGB reconstruction, and material differentiation—paving the way for lightweight, integrated, and robust UAV perception and data collection in complex natural environments.
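For intuition, depth from structured light reduces to triangulation over the camera-projector baseline. The sketch below is generic (the system's actual event-based decoding is more involved), with assumed calibration values.

```python
# Generic structured-light triangulation: the disparity between the observed
# pixel column and the projected column gives metric depth, z = f * B / d.
import numpy as np

f = 600.0    # focal length in pixels (assumed calibration)
B = 0.10     # camera-projector baseline in meters (assumed)

def depth_from_disparity(u_cam, u_proj):
    d = np.asarray(u_cam, dtype=float) - np.asarray(u_proj, dtype=float)
    with np.errstate(divide="ignore"):
        z = np.where(np.abs(d) > 1e-6, f * B / d, np.inf)
    return z  # depth in meters along the optical axis
```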
Citations: 0
Accelerating High-Capacity Ridepooling in Robo-Taxi Systems
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-13 | DOI: 10.1109/LRA.2026.3653376
Xinling Li;Daniele Gammelli;Alex Wallar;Jinhua Zhao;Gioele Zardini
Rapid urbanization has increased demand for customized urban mobility, making on-demand services and robo-taxis central to future transportation. The efficiency of these systems hinges on real-time fleet coordination algorithms. This work accelerates the state-of-the-art high-capacity ridepooling framework by identifying its computational bottlenecks and introducing two complementary strategies: (i) a data-driven feasibility predictor that filters low-potential trips, and (ii) a graph-partitioning scheme that enables parallelizable trip generation. Using real-world Manhattan demand data, we show that the acceleration algorithms reduce the optimality gap by up to 27% under real-time constraints and cut empty travel time by up to 5%. These improvements translate into tangible economic and environmental benefits, advancing the scalability of high-capacity robo-taxi operations in dense urban settings.
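Strategy (i) amounts to scoring candidate trips with a learned classifier before the expensive exact feasibility check reaches the solver. A minimal sketch, where the feature set, threshold, and sklearn-style `predict_proba` interface are illustrative assumptions:

```python
# Feasibility filtering: prune low-potential request combinations before the
# exact routing check / ILP. Features and threshold are illustrative.
def trip_features(requests, vehicle):
    # e.g., group size and remaining seats; a real model would add
    # pairwise detour estimates and pickup-time slack
    return [len(requests), vehicle.capacity - len(requests)]

def filter_candidate_trips(candidates, vehicle, predictor, threshold=0.2):
    kept = []
    for requests in candidates:
        score = predictor.predict_proba([trip_features(requests, vehicle)])[0][1]
        if score >= threshold:          # only promising trips reach the solver
            kept.append(requests)
    return kept
```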
Citations: 0
Learning to Anchor Visual Odometry: KAN-Based Pose Regression for Planetary Landing
IF 5.3 | CAS Tier 2 (Computer Science) | Q2 ROBOTICS | Pub Date: 2026-01-13 | DOI: 10.1109/LRA.2026.3653384
Xubo Luo;Zhaojin Li;Xue Wan;Wei Zhang;Leizheng Shu
Accurate and real-time 6-DoF localization is mission-critical for autonomous lunar landing, yet existing approaches remain limited: visual odometry (VO) drifts unboundedly, while map-based absolute localization fails in texture-sparse or low-light terrain. We introduce KANLoc, a monocular localization framework that tightly couples VO with a lightweight but robust absolute pose regressor. At its core is a Kolmogorov–Arnold Network (KAN) that learns the complex mapping from image features to map coordinates, producing sparse but highly reliable global pose anchors. These anchors are fused into a bundle adjustment framework, effectively canceling drift while retaining local motion precision. KANLoc delivers three key advances: (i) a KAN-based pose regressor that achieves high accuracy with remarkable parameter efficiency, (ii) a hybrid VO–absolute localization scheme that yields globally consistent real-time trajectories (≥15 FPS), and (iii) a tailored data augmentation strategy that improves robustness to sensor occlusion. On both realistic synthetic and real lunar landing datasets, KANLoc reduces average translation and rotation error by 32% and 45%, respectively, with per-trajectory gains of up to 45%/48%, outperforming strong baselines.
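The anchor-fusion step can be pictured as a windowed least-squares cost that combines VO relative-motion residuals with prior terms at the few anchored frames. The sketch below uses 2D poses, naive pose composition, and unit weights as simplifying assumptions; it is not the paper's bundle adjustment.

```python
# Toy cost: odometry consistency terms plus drift-cancelling anchor priors.
import numpy as np

def fuse_cost(poses, vo_deltas, anchors, w_anchor=10.0):
    """poses: (N, 3) array of [x, y, yaw]; vo_deltas: list of (i, j, delta);
    anchors: dict frame_index -> absolute [x, y, yaw] from the KAN regressor."""
    cost = 0.0
    for i, j, delta in vo_deltas:                 # VO relative-motion residuals
        pred = poses[j] - poses[i]
        cost += np.sum((pred - delta) ** 2)
    for k, abs_pose in anchors.items():           # sparse absolute priors
        cost += w_anchor * np.sum((poses[k] - abs_pose) ** 2)
    return cost  # minimized over `poses` by any nonlinear least-squares solver
```

Because the anchors are sparse but reliable, even a handful of prior terms pins the trajectory globally while the dense VO terms preserve local motion precision.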
Citations: 0