
Latest Publications in IEEE Robotics and Automation Letters

Directional Correspondence Based Cross-Source Point Cloud Registration for USV-AAV Cooperation in Lentic Environments
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3523232
Byoungkwon Yoon;Seokhyun Hong;Dongjun Lee
We propose a novel cross-source point cloud registration (CSPR) method for USV-AAV cooperation in lentic environments. In the wild outdoors, the typical working domain of a USV-AAV team, CSPR faces significant challenges from platform-domain problems (complex unstructured surroundings and viewing-angle differences) in addition to sensor-domain problems (varying density, noise patterns, and scale). These characteristics create large discrepancies in local geometry, causing existing CSPR methods that rely on point-to-point correspondence based on local geometry around key points (e.g., surface normals, shape functions, angles) to struggle. To address this challenge, we propose the novel concept of a directional-correspondence-based iterative cross-source point cloud registration algorithm. Instead of using point-to-point correspondence under large discrepancies in local geometry, we build correspondences between directions to enable robust registration in the wild outdoors. Also, since the proposed directional correspondence uses bearing angles and normalized coordinates, we can separate scale estimation from transformation estimation, effectively resolving the problem of different scales between the two point clouds. Our algorithm outperforms the state-of-the-art methods, achieving an average error of $1.60^\circ$ for rotation and 1.83% for translation. Additionally, we demonstrate a USV-AAV team operation with enhanced visual information achieved with the proposed method.
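The separation of scale from rotation described in the abstract can be pictured with a small sketch: when correspondences are expressed as unit directions (bearings), the rotation estimate no longer depends on the unknown scale, which can be recovered afterwards from range ratios. The toy data and the Kabsch-style solver below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: estimate rotation from corresponding *directions* (scale-free),
# then recover the scale separately from range ratios.
import numpy as np

def rotation_from_directions(d_src, d_dst):
    """Kabsch-style rotation estimate R such that R @ d_src[i] ~ d_dst[i]."""
    H = d_src.T @ d_dst                                   # 3x3 cross-covariance of unit vectors
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ S @ U.T

rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))                           # hypothetical source cloud
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
scale_true = 2.5
dst = scale_true * (src @ R_true.T)                       # scaled, rotated copy

# Directional correspondences: bearing (unit) vectors from each cloud's origin.
d_src = src / np.linalg.norm(src, axis=1, keepdims=True)
d_dst = dst / np.linalg.norm(dst, axis=1, keepdims=True)

R_est = rotation_from_directions(d_src, d_dst)
scale_est = np.median(np.linalg.norm(dst, axis=1) / np.linalg.norm(src, axis=1))
print(np.allclose(R_est, R_true, atol=1e-6), scale_est)   # rotation and scale recovered separately
```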
{"title":"Directional Correspondence Based Cross-Source Point Cloud Registration for USV-AAV Cooperation in Lentic Environments","authors":"Byoungkwon Yoon;Seokhyun Hong;Dongjun Lee","doi":"10.1109/LRA.2024.3523232","DOIUrl":"https://doi.org/10.1109/LRA.2024.3523232","url":null,"abstract":"We propose a novel cross-source point cloud registration (CSPR) method for USV-AAV cooperation in lentic environments. In the wild outdoors, which is the typical working domain of the USV-AAV team, CSPR faces significant challenges due to platform-domain problems (complex unstructured surroundings and viewing angle difference) in addition to sensor-domain problems (varying density, noise pattern, and scale). These characteristics make large discrepancies in local geometry, causing existing CSPR methods that rely on point-to-point correspondence based on local geometry around key points (e.g. surface normal, shape function, angle) to struggle. To address this challenge, we propose the novel concept of a directional correspondence-based iterative cross-source point cloud registration algorithm. Instead of using point-to-point correspondence under large discrepancies in local geometry, we build correspondence about directions to enable robust registration in the wild outdoors. Also, since the proposed directional correspondence uses bearing angle and normalized coordinate, we can separate scale estimation with transformation, effectively resolving the problem of different scales between two point clouds. Our algorithm outperforms the state-of-the-art methods, achieving an average error of \u0000<inline-formula><tex-math>$1.60^circ$</tex-math></inline-formula>\u0000 for rotation and 1.83% for translation. Additionally, we demonstrated a USV-AAV team operation with enhanced visual information achieved with the proposed method.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1601-1608"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10816390","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142940756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Open-Vocabulary Mobile Manipulation Based on Double Relaxed Contrastive Learning With Dense Labeling
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3522841
Daichi Yashima;Ryosuke Korekata;Komei Sugiura
Growing labor shortages are increasing the demand for domestic service robots (DSRs) to assist in various settings. In this study, we develop a DSR that transports everyday objects to specified pieces of furniture based on open-vocabulary instructions. Our approach focuses on retrieving images of target objects and receptacles from pre-collected images of indoor environments. For example, given an instruction "Please get the right red towel hanging on the metal towel rack and put it in the white washing machine on the left," the DSR is expected to carry the red towel to the washing machine based on the retrieved images. This is challenging because the correct images should be retrieved from thousands of collected images, which may include many images of similar towels and appliances. To address this, we propose RelaX-Former, which learns diverse and robust representations from positive, unlabeled positive, and negative samples. We evaluated RelaX-Former on a dataset containing real-world indoor images and human-annotated instructions including complex referring expressions. The experimental results demonstrate that RelaX-Former outperformed existing baseline models across standard image retrieval metrics. Moreover, we performed physical experiments using a DSR to evaluate the performance of our approach in a zero-shot transfer setting. The experiments involved the DSR carrying objects to specific receptacles based on open-vocabulary instructions, achieving an overall success rate of 75%.
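The retrieval step described above, ranking pre-collected environment images against an open-vocabulary instruction, can be pictured with a minimal cosine-similarity sketch. The embedding dimensions and the random "bank" are placeholders; RelaX-Former's encoder and its double relaxed contrastive loss are not reproduced here.

```python
# Minimal retrieval sketch under assumed embeddings (not RelaX-Former itself):
# rank a bank of image embeddings against one instruction embedding by cosine similarity.
import numpy as np

def top_k_images(instruction_emb, image_embs, k=5):
    q = instruction_emb / np.linalg.norm(instruction_emb)
    g = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    scores = g @ q                                  # cosine similarity to the instruction
    order = np.argsort(-scores)[:k]
    return order, scores[order]

rng = np.random.default_rng(1)
bank = rng.normal(size=(5000, 512))                 # hypothetical pre-collected image embeddings
query = rng.normal(size=512)                        # hypothetical embedding of the instruction
idx, sim = top_k_images(query, bank)
print(idx, sim)                                     # indices of the best candidate images
```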
{"title":"Open-Vocabulary Mobile Manipulation Based on Double Relaxed Contrastive Learning With Dense Labeling","authors":"Daichi Yashima;Ryosuke Korekata;Komei Sugiura","doi":"10.1109/LRA.2024.3522841","DOIUrl":"https://doi.org/10.1109/LRA.2024.3522841","url":null,"abstract":"Growing labor shortages are increasing the demand for domestic service robots (DSRs) to assist in various settings. In this study, we develop a DSR that transports everyday objects to specified pieces of furniture based on open-vocabulary instructions. Our approach focuses on retrieving images of target objects and receptacles from pre-collected images of indoor environments. For example, given an instruction “Please get the right red towel hanging on the metal towel rack and put it in the white washing machine on the left,” the DSR is expected to carry the red towel to the washing machine based on the retrieved images. This is challenging because the correct images should be retrieved from thousands of collected images, which may include many images of similar towels and appliances. To address this, we propose RelaX-Former, which learns diverse and robust representations from among positive, unlabeled positive, and negative samples. We evaluated RelaX-Former on a dataset containing real-world indoor images and human annotated instructions including complex referring expressions. The experimental results demonstrate that RelaX-Former outperformed existing baseline models across standard image retrieval metrics. Moreover, we performed physical experiments using a DSR to evaluate the performance of our approach in a zero-shot transfer setting. The experiments involved the DSR to carry objects to specific receptacles based on open-vocabulary instructions, achieving an overall success rate of 75%.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1728-1735"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Structured Pruning for Efficient Visual Place Recognition
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3523222
Oliver Grainge;Michael Milford;Indu Bodala;Sarvapali D. Ramchurn;Shoaib Ehsan
Visual Place Recognition (VPR) is fundamental for the global re-localization of robots and devices, enabling them to recognize previously visited locations based on visual inputs. This capability is crucial for maintaining accurate mapping and localization over large areas. Given that VPR methods need to operate in real-time on embedded systems, it is critical to optimize these systems for minimal resource consumption. While the most efficient VPR approaches employ standard convolutional backbones with fixed descriptor dimensions, these often lead to redundancy in the embedding space as well as in the network architecture. Our work introduces a novel structured pruning method that not only streamlines common VPR architectures but also strategically removes redundancies within the feature embedding space. This dual focus significantly enhances the efficiency of the system, reducing both map and model memory requirements and decreasing feature extraction and retrieval latencies. Our approach reduces memory usage and latency by 21% and 16%, respectively, across models, while impacting recall@1 accuracy by less than 1%. This significant improvement enhances real-time applications on edge devices with negligible accuracy loss.
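As a rough illustration of structured (whole-filter) pruning on a convolutional backbone, the sketch below uses PyTorch's built-in pruning utilities; the toy backbone, the 30% ratio, and the L1 criterion are assumptions for demonstration, not the selection rule proposed in the letter.

```python
# Generic channel-pruning sketch with PyTorch utilities (not the authors' method):
# zero out whole filters with the smallest L1 norm, then bake the mask into the weights.
# In practice the zeroed filters are then physically removed to realize the memory savings.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

backbone = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
)

for module in backbone.modules():
    if isinstance(module, nn.Conv2d):
        # Prune 30% of output channels (dim=0 = filters) ranked by L1 norm (n=1).
        prune.ln_structured(module, name="weight", amount=0.3, n=1, dim=0)
        prune.remove(module, "weight")                # make the pruning permanent

with torch.no_grad():
    out = backbone(torch.randn(1, 3, 224, 224))
print(out.shape)                                      # feature map from the slimmed backbone
```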
{"title":"Structured Pruning for Efficient Visual Place Recognition","authors":"Oliver Grainge;Michael Milford;Indu Bodala;Sarvapali D. Ramchurn;Shoaib Ehsan","doi":"10.1109/LRA.2024.3523222","DOIUrl":"https://doi.org/10.1109/LRA.2024.3523222","url":null,"abstract":"Visual Place Recognition (VPR) is fundamental for the global re-localization of robots and devices, enabling them to recognize previously visited locations based on visual inputs. This capability is crucial for maintaining accurate mapping and localization over large areas. Given that VPR methods need to operate in real-time on embedded systems, it is critical to optimize these systems for minimal resource consumption. While the most efficient VPR approaches employ standard convolutional backbones with fixed descriptor dimensions, these often lead to redundancy in the embedding space as well as in the network architecture. Our work introduces a novel structured pruning method, to not only streamline common VPR architectures but also to strategically remove redundancies within the feature embedding space. This dual focus significantly enhances the efficiency of the system, reducing both map and model memory requirements and decreasing feature extraction and retrieval latencies. Our approach has reduced memory usage and latency by 21% and 16%, respectively, across models, while minimally impacting recall@1 accuracy by less than 1%. This significant improvement enhances real-time applications on edge devices with negligible accuracy loss.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"2024-2031"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142993064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Inverse Design of Snap-Actuated Jumping Robots Powered by Mechanics-Aided Machine Learning
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3523218
Dezhong Tong;Zhuonan Hao;Mingchao Liu;Weicheng Huang
Simulating soft robots offers a cost-effective approach to exploring their design and control strategies. While current models, such as finite element analysis, are effective in capturing soft robotic dynamics, the field still requires a broadly applicable and efficient numerical simulation method. In this letter, we introduce a discrete differential geometry-based framework for the model-based inverse design of a novel snap-actuated jumping robot. Our findings reveal that the snapping beam actuator exhibits both symmetric and asymmetric dynamic modes, enabling tunable robot trajectories (e.g., horizontal or vertical jumps). Leveraging this bistable beam as a robotic actuator, we propose a physics-data hybrid inverse design strategy to endow the snap-jump robot with a diverse range of jumping capabilities. Using a physics engine to examine the effects of design parameters on jump dynamics, we then use the extensive simulation data to establish a data-driven inverse design solution. This approach allows rapid exploration of parameter spaces to achieve targeted jump trajectories, providing a robust foundation for the robot's fabrication. Our methodology offers a powerful framework for advancing the design and control of soft robots through integrated simulation and data-driven techniques.
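The physics-data hybrid inverse design loop, simulate over a parameter space, fit a surrogate on the results, then search the surrogate for parameters that reach a target jump, can be sketched with a toy ballistic forward model standing in for the physics engine; the parameterization and nearest-neighbour surrogate below are illustrative assumptions, not the paper's models.

```python
# Toy stand-in for data-driven inverse design: sample design parameters, evaluate a
# cheap forward model, then search the simulated data for parameters hitting a target.
import numpy as np

def forward_model(angle_rad, energy):
    """Toy 'physics engine': ballistic jump distance from launch angle and kinetic energy."""
    v2 = 2.0 * energy                                 # v^2 for unit mass
    return v2 * np.sin(2.0 * angle_rad) / 9.81        # projectile range formula

rng = np.random.default_rng(2)
angles = rng.uniform(0.1, 1.4, 500)                   # sampled design parameters
energies = rng.uniform(0.5, 5.0, 500)
distances = forward_model(angles, energies)           # "simulation data"

def inverse_design(target_distance):
    """Nearest-neighbour surrogate: return the sampled design closest to the target."""
    best = np.argmin(np.abs(distances - target_distance))
    return angles[best], energies[best]

angle, energy = inverse_design(target_distance=0.6)
print(angle, energy, forward_model(angle, energy))    # parameters reproducing roughly 0.6 m
```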
{"title":"Inverse Design of Snap-Actuated Jumping Robots Powered by Mechanics-Aided Machine Learning","authors":"Dezhong Tong;Zhuonan Hao;Mingchao Liu;Weicheng Huang","doi":"10.1109/LRA.2024.3523218","DOIUrl":"https://doi.org/10.1109/LRA.2024.3523218","url":null,"abstract":"Simulating soft robots offers a cost-effective approach to exploring their design and control strategies. While current models, such as finite element analysis, are effective in capturing soft robotic dynamics, the field still requires a broadly applicable and efficient numerical simulation method. In this letter, we introduce a discrete differential geometry-based framework for the model-based inverse design of a novel snap-actuated jumping robot. Our findings reveal that the snapping beam actuator exhibits both symmetric and asymmetric dynamic modes, enabling tunable robot trajectories (e.g., horizontal or vertical jumps). Leveraging this bistable beam as a robotic actuator, we propose a physics-data hybrid inverse design strategy to endow the snap-jump robot with a diverse range of jumping capabilities. By utilizing a physical engine to examine the effects of design parameters on jump dynamics, we then use extensive simulation data to establish a data-driven inverse design solution. This approach allows rapid exploration of parameter spaces to achieve targeted jump trajectories, providing a robust foundation for the robot's fabrication. Our methodology offers a powerful framework for advancing the design and control of soft robots through integrated simulation and data-driven techniques.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1720-1727"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
FAST-LIEO: Fast and Real-Time LiDAR-Inertial-Event-Visual Odometry
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3522843
Zirui Wang;Yangtao Ge;Kewei Dong;I-Ming Chen;Jing Wu
Unlike a standard camera that relies on exposure to obtain output frame by frame, an event camera only outputs an event when the change of brightness intensity in a pixel exceeds a threshold, and the outputs of different pixels are independent of each other. Benefiting from its bio-inspired design, the event camera has the advantages of low latency and high dynamic range. Research on multi-sensor fusion with event cameras remains limited. In this paper, we propose FAST-LIEO, a framework for fast and real-time LiDAR-inertial-event odometry. The framework tightly fuses LiDAR and event camera measurements without any feature extraction or matching. Besides, our system supports both LIEO and LIEVO (extended with RGB camera fusion). We design a novel EIO subsystem for LiDAR-event fusion. The EIO subsystem maintains a semi-dense event map and estimates the state by aligning the event representation to the map. The semi-dense event map is built from LiDAR points by utilizing the edge information and temporal information provided by event representations. Besides testing our method on a public benchmark dataset, we also collected real-world data using our sensor suite and conducted experiments on this self-captured dataset. The experimental results show the high robustness and accuracy of our method in challenging conditions, along with strong real-time performance. To the best of our knowledge, FAST-LIEO is the first system that can tightly fuse LiDAR, IMU, event camera, and standard camera measurements in simultaneous localization and mapping.
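The per-pixel, threshold-triggered measurement model of an event camera mentioned above can be summarized in a few lines; the contrast threshold and synthetic brightness sequence below are assumptions for illustration, not part of FAST-LIEO.

```python
# Minimal sketch of the event-camera measurement model: a pixel fires when its
# log-brightness changes by more than a contrast threshold C since its last event.
import numpy as np

def generate_events(frames, C=0.2, eps=1e-6):
    """frames: (T, H, W) brightness sequence -> list of (t, y, x, polarity) events."""
    log_ref = np.log(frames[0] + eps)                 # per-pixel reference log-brightness
    events = []
    for t in range(1, frames.shape[0]):
        log_now = np.log(frames[t] + eps)
        diff = log_now - log_ref
        ys, xs = np.where(np.abs(diff) >= C)          # pixels crossing the threshold
        for y, x in zip(ys, xs):
            events.append((t, y, x, int(np.sign(diff[y, x]))))
            log_ref[y, x] = log_now[y, x]             # reset reference at fired pixels
    return events

# Hypothetical brightening 4x4 patch over 20 time steps.
frames = np.linspace(0.2, 1.0, 20)[:, None, None] * np.ones((20, 4, 4))
print(len(generate_events(frames)))                   # number of events triggered
```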
{"title":"FAST-LIEO: Fast and Real-Time LiDAR-Inertial-Event-Visual Odometry","authors":"Zirui Wang;Yangtao Ge;Kewei Dong;I-Ming Chen;Jing Wu","doi":"10.1109/LRA.2024.3522843","DOIUrl":"https://doi.org/10.1109/LRA.2024.3522843","url":null,"abstract":"Unlike a standard camera that relies on exposure to obtain output frame by frame, an event camera only outputs an event when the change of brightness intensity in a pixel exceeds a threshold, and the outputs of different pixels are independent to each other. Benefited from its bio-inspired design, event camera has the advantages of low latency and high dynamic range. The researches on multi-sensor fusion with event camera are few so far. In this paper, we propose FAST-LIEO, a framework for fast and real-time LiDAR-inertial-event odometry. The framework tightly fuses LiDAR and event camera measurements without any feature extraction or matching. Besides, our system supports both LIEO and LIEVO (extended with RGB camera fusion). We design a novel EIO subsystem for LiDAR-event fusion. The EIO subsystem maintains a semi-dense event map and estimates the state by aligning the event representation to map. The semi-dense event map is built from LiDAR points by utilizing the edge information and temporal information provided by event representations. Besides testing our method on public benchmark dataset, we also collected real-world data by utilizing our sensor suite and conducted experiments on our self-captured dataset. The experiment results show the high robustness and accuracy of our method in challenging conditions with high real-time ability. To the best of our knowledge, our FAST-LIEO is the first system that can tightly fuse LiDAR, IMU, event camera and standard camera measurements in simultaneously localization and mapping.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1680-1687"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142975981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Thinking Before Decision: Efficient Interactive Visual Navigation Based on Local Accessibility Prediction
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3522769
Qinrui Liu;Biao Luo;Dongbo Zhang;Renjie Chen
Embodied AI has made prominent advances in interactive visual navigation tasks based on deep reinforcement learning. In the pursuit of higher success rates in navigation, previous work has typically focused on training embodied agents to push away interactable objects on the ground. However, such interactive visual navigation largely ignores the cost of interacting with the environment, and interactions are sometimes counterproductive (e.g., pushing an obstacle may block an existing path). Considering these scenarios, we develop an efficient interactive visual navigation method. We propose the Local Accessibility Prediction (LAP) Module to enable the agent to reason about how the upcoming action will affect the environment and the navigation task before making a decision. Besides, we introduce an interaction penalty term to represent the cost of interacting with the environment, with different penalties imposed depending on the size of the obstacle pushed away. We introduce the average number of interactions as a new evaluation metric. Also, a two-stage training pipeline is employed to reach better learning performance. Our experiments in the AI2-THOR environment show that our method outperforms the baseline across all evaluation metrics, achieving significant improvements in navigation performance.
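The size-dependent interaction penalty can be sketched as a simple reward-shaping term; the constants and function signature below are hypothetical and only illustrate the idea that pushing larger obstacles costs more, not the paper's exact reward.

```python
# Minimal sketch of a navigation reward with an interaction penalty scaled by obstacle size.
def step_reward(reached_goal, pushed_obstacle_size=None,
                step_cost=-0.01, success_bonus=10.0, interaction_coeff=0.5):
    reward = step_cost                                      # small cost per step
    if pushed_obstacle_size is not None:
        reward -= interaction_coeff * pushed_obstacle_size  # larger pushes cost more
    if reached_goal:
        reward += success_bonus
    return reward

print(step_reward(False))                                   # plain navigation step
print(step_reward(False, pushed_obstacle_size=0.8))         # pushing a large obstacle
print(step_reward(True))                                    # goal reached
```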
{"title":"Thinking Before Decision: Efficient Interactive Visual Navigation Based on Local Accessibility Prediction","authors":"Qinrui Liu;Biao Luo;Dongbo Zhang;Renjie Chen","doi":"10.1109/LRA.2024.3522769","DOIUrl":"https://doi.org/10.1109/LRA.2024.3522769","url":null,"abstract":"Embodied AI has made prominent advances in interactive visual navigation tasks based on deep reinforcement learning. In the pursuit of higher success rates in navigation, previous work has typically focused on training embodied agents to push away interactable objects on the ground. However, such interactive visual navigation largely ignores the cost of interacting with the environment and interactions are sometimes counterproductive (e.g., push the obstacle but block the existing path). Considering these scenarios, we develop a efficient interactive visual navigation method. We propose Local Accessibility Prediction (LAP) Module to enable the agent to learn thinking about how the upcoming action will affect the environment and the navigation task before making a decision. Besides, we introduce the interaction penalty term to represent the cost of interacting with the environment. And different interaction penalties are imposed depending on the size of the obstacle pushed away. We introduce the average number of interactions as a new evaluation metric. Also, a two-stage training pipeline is employed to reach better learning performance. Our experiments in AI2-THOR environment show that our method outperforms the baseline in all evaluation metrics, achieving significant improvements in navigation performance.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1688-1695"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142976044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
DreamCar: Leveraging Car-Specific Prior for In-the-Wild 3D Car Reconstruction
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3523231
Xiaobiao Du;Haiyang Sun;Ming Lu;Tianqing Zhu;Xin Yu
Self-driving industries usually employ professional artists to build exquisite 3D cars. However, it is expensive to craft large-scale digital assets. Since there are already numerous datasets available that contain a vast number of images of cars, we focus on reconstructing high-quality 3D car models from these datasets. However, these datasets only contain one side of cars in the forward-moving scene. We try to use existing generative models to provide more supervision information, but they struggle to generalize well to cars since they are trained on synthetic datasets that are not car-specific. In addition, the reconstructed 3D car texture becomes misaligned due to large errors in camera pose estimation when dealing with in-the-wild images. These restrictions make it challenging for previous methods to reconstruct complete 3D cars. To address these problems, we propose a novel method, named DreamCar, which can reconstruct high-quality 3D cars given a few images, or even a single image. To generalize the generative model, we collect a car dataset, named Car360, with over 5,600 vehicles. With this dataset, we make the generative model more robust to cars. We use this car-specific generative prior to guide the reconstruction via Score Distillation Sampling. To further complement the supervision information, we utilize the geometric and appearance symmetry of cars. Finally, we propose a pose optimization method that rectifies poses to tackle texture misalignment. Extensive experiments demonstrate that our method significantly outperforms existing methods in reconstructing high-quality 3D cars.
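The geometric symmetry prior can be illustrated by mirroring reconstructed surface samples across an assumed left-right symmetry plane of the car; the frame convention and sample points below are hypothetical and do not reflect DreamCar's full use of appearance symmetry.

```python
# Minimal sketch: double the geometric supervision on the unseen side of a car by
# reflecting points across the longitudinal symmetry plane x = 0.
import numpy as np

def mirror_across_x_plane(points):
    """points: (N, 3) in a car-centred frame with x pointing to the car's right."""
    mirrored = points.copy()
    mirrored[:, 0] *= -1.0
    return mirrored

visible_side = np.array([[0.8, 1.2, 0.5],
                         [0.9, -0.4, 0.7]])          # hypothetical surface samples on one side
augmented = np.vstack([visible_side, mirror_across_x_plane(visible_side)])
print(augmented)                                      # original plus mirrored samples
```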
{"title":"DreamCar: Leveraging Car-Specific Prior for In-the-Wild 3D Car Reconstruction","authors":"Xiaobiao Du;Haiyang Sun;Ming Lu;Tianqing Zhu;Xin Yu","doi":"10.1109/LRA.2024.3523231","DOIUrl":"https://doi.org/10.1109/LRA.2024.3523231","url":null,"abstract":"Self-driving industries usually employ professional artists to build exquisite 3D cars. However, it is expensive to craft large-scale digital assets. Since there are already numerous datasets available that contain a vast number of images of cars, we focus on reconstructing high-quality 3D car models from these datasets. However, these datasets only contain one side of cars in the forward-moving scene. We try to use the existing generative models to provide more supervision information, but they struggle to generalize well in cars since they are trained on synthetic datasets not car-specific. In addition, The reconstructed 3D car texture misaligns due to a large error in camera pose estimation when dealing with in-the-wild images. These restrictions make it challenging for previous methods to reconstruct complete 3D cars. To address these problems, we propose a novel method, named DreamCar, which can reconstruct high-quality 3D cars given a few images even a single image. To generalize the generative model, we collect a car dataset, named Car360, with over 5,600 vehicles. With this dataset, we make the generative model more robust to cars. We use this generative prior specific to the car to guide its reconstruction via Score Distillation Sampling. To further complement the supervision information, we utilize the geometric and appearance symmetry of cars. Finally, we propose a pose optimization method that rectifies poses to tackle texture misalignment. Extensive experiments demonstrate that our method significantly outperforms existing methods in reconstructing high-quality 3D cars.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1840-1847"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142975736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Planning Human-Robot Co-Manipulation With Human Motor Control Objectives and Multi-Component Reaching Strategies
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3522760
Kevin Haninger;Luka Peternel
For successful goal-directed human-robot interaction, the robot should adapt to the intentions and actions of the collaborating human. This can be supported by musculoskeletal or data-driven human models, where the former are limited to lower-level functioning such as ergonomics, and the latter have limited generalizability or data efficiency. What is missing is the inclusion of human motor control models that can provide generalizable estimates of human behavior and integrate into robot planning methods. We use well-studied models from human motor control, based on the speed-accuracy and cost-benefit trade-offs, to plan collaborative robot motions. In these models, the human trajectory minimizes an objective function, a formulation we adapt to numerical trajectory optimization. This can then be extended with constraints and new variables to realize collaborative motion planning and goal estimation. We deploy this model, as well as a multi-component movement strategy, in physical collaboration on uncertain goal-reaching and synchronized motion tasks, showing the ability of the approach to produce human-like trajectories over a range of conditions.
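As one classic example of the kind of human motor-control objective referred to above, the minimum-jerk model of Flash and Hogan admits a closed-form point-to-point reach; the sketch below evaluates that profile and only illustrates the trajectory-as-optimum idea, not the specific cost function or optimizer used in this letter.

```python
# Minimal sketch: closed-form minimum-jerk reaching profile (Flash & Hogan),
# the optimum of one well-studied human motor-control objective.
import numpy as np

def minimum_jerk(x0, xf, T, n=101):
    """Position profile from x0 to xf over duration T that minimizes integrated squared jerk."""
    tau = np.linspace(0.0, 1.0, n)                    # normalized time t / T
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5        # smooth 0 -> 1 blend with zero end velocity/acceleration
    return x0 + (xf - x0) * s, tau * T

pos, t = minimum_jerk(x0=0.0, xf=0.3, T=1.2)          # a 30 cm reach in 1.2 s
print(pos[0], pos[len(pos) // 2], pos[-1])            # starts at 0, passes 0.15, ends at 0.3 m
```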
{"title":"Planning Human-Robot Co-Manipulation With Human Motor Control Objectives and Multi-Component Reaching Strategies","authors":"Kevin Haninger;Luka Peternel","doi":"10.1109/LRA.2024.3522760","DOIUrl":"https://doi.org/10.1109/LRA.2024.3522760","url":null,"abstract":"For successful goal-directed human-robot interaction, the robot should adapt to the intentions and actions of the collaborating human. This can be supported by musculoskeletal or data-driven human models, where the former are limited to lower-level functioning such as ergonomics, and the latter have limited generalizability or data efficiency. What is missing, is the inclusion of human motor control models that can provide generalizable human behavior estimates and integrate into robot planning methods. We use well-studied models from human motor control based on the speed-accuracy and cost-benefit trade-offs to plan collaborative robot motions. In these models, the human trajectory minimizes an objective function, a formulation we adapt to numerical trajectory optimization. This can then be extended with constraints and new variables to realize collaborative motion planning and goal estimation. We deploy this model, as well as a multi-component movement strategy, in physical collaboration with uncertain goal-reaching and synchronized motion tasks, showing the ability of the approach to produce human-like trajectories over a range of conditions.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1433-1440"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142918352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
TripletLoc: One-Shot Global Localization Using Semantic Triplet in Urban Environments
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3523228
Weixin Ma;Huan Yin;Patricia J. Y. Wong;Danwei Wang;Yuxiang Sun;Zhongqing Su
This study presents a system, TripletLoc, for fast and robust global registration of a single LiDAR scan to a large-scale reference map. In contrast to conventional methods using place recognition and point cloud registration, TripletLoc directly generates correspondences on lightweight semantics, which is close to how humans perceive the world. Specifically, TripletLoc first extracts instances from the single query scan and the large-scale reference map, respectively, to construct two semantic graphs. Then, a novel semantic triplet-based histogram descriptor is designed to achieve instance-level matching between the query scan and the reference map. Graph-theoretic outlier pruning is leveraged to obtain inlier correspondences from the raw instance-to-instance correspondences for robust 6-DoF pose estimation. In addition, a novel Road Surface Normal (RSN) map is proposed to provide a prior rotation constraint that further enhances pose estimation. We evaluate TripletLoc extensively on a large-scale public dataset, HeliPR, which covers diverse and complex scenarios in urban environments. Experimental results demonstrate that TripletLoc achieves fast and robust global localization in diverse and challenging environments, with high memory efficiency.
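The idea of an instance-level descriptor built from semantic triplets can be sketched as a histogram over class triples: two scans of the same place yield similar histograms regardless of viewpoint. The neighbourhood and binning rules below are simplified assumptions rather than TripletLoc's actual descriptor.

```python
# Minimal sketch: describe a scan by a histogram over triplets of instance classes.
from itertools import combinations
from collections import Counter

def triplet_histogram(instance_classes):
    """instance_classes: list of semantic labels, one per extracted instance."""
    hist = Counter()
    for triple in combinations(sorted(instance_classes), 3):
        hist[triple] += 1                             # one vote per class triple
    return hist

query_scan = ["pole", "trunk", "trunk", "building"]   # hypothetical instances in the query scan
map_region = ["trunk", "pole", "building", "trunk", "pole"]  # hypothetical instances in a map region
print(triplet_histogram(query_scan))
print(triplet_histogram(map_region))                  # overlapping bins indicate a likely match
```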
{"title":"TripletLoc: One-Shot Global Localization Using Semantic Triplet in Urban Environments","authors":"Weixin Ma;Huan Yin;Patricia J. Y. Wong;Danwei Wang;Yuxiang Sun;Zhongqing Su","doi":"10.1109/LRA.2024.3523228","DOIUrl":"https://doi.org/10.1109/LRA.2024.3523228","url":null,"abstract":"This study presents a system, TripletLoc, for fast and robust global registration of a single LiDAR scan to a large-scale reference map. In contrast to conventional methods using place recognition and point cloud registration, TripletLoc directly generates correspondences on lightweight semantics, which is close to how humans perceive the world. Specifically, TripletLoc first respectively extracts instances from the single query scan and the large-scale reference map to construct two semantic graphs. Then, a novel semantic triplet-based histogram descriptor is designed to achieve instance-level matching between the query scan and the reference map. Graph-theoretic outlier pruning is leveraged to obtain inlier correspondences from raw instance-to-instance correspondences for robust 6-DoF pose estimation. In addition, a novel Road Surface Normal (RSN) map is proposed to provide a prior rotation constraint to further enhance pose estimation. We evaluate TripletLoc extensively on a large-scale public dataset, HeliPR, which covers diverse and complex scenarios in urban environments. Experimental results demonstrate that TripletLoc could achieve fast and robust global localization under diverse and challenging environments, with high memory efficiency.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1569-1576"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142937836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
LI-GS: Gaussian Splatting With LiDAR Incorporated for Accurate Large-Scale Reconstruction
IF 4.6, CAS Tier 2 (Computer Science), Q2 ROBOTICS. Pub Date: 2024-12-26. DOI: 10.1109/LRA.2024.3522846
Changjian Jiang;Ruilan Gao;Kele Shao;Yue Wang;Rong Xiong;Yu Zhang
Large-scale 3D reconstruction is critical in the field of robotics, and the potential of 3D Gaussian Splatting (3DGS) for achieving accurate object-level reconstruction has been demonstrated. However, ensuring geometric accuracy in outdoor and unbounded scenes remains a significant challenge. This study introduces LI-GS, a reconstruction system that incorporates LiDAR and Gaussian Splatting to enhance geometric accuracy in large-scale scenes. 2D Gaussian surfels are employed as the map representation to enhance surface alignment. Additionally, a novel modeling method is proposed to convert LiDAR point clouds to plane-constrained multimodal Gaussian Mixture Models (GMMs). The GMMs are utilized during both initialization and optimization stages to ensure sufficient and continuous supervision over the entire scene while mitigating the risk of overfitting. Furthermore, GMMs are employed in mesh extraction to eliminate artifacts and improve the overall geometric quality. Experiments demonstrate that our method outperforms state-of-the-art methods in large-scale 3D reconstruction, achieving higher accuracy compared to both LiDAR-based methods and Gaussian-based methods, with improvements of 52.6% and 68.7%, respectively.
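Converting a point set into a Gaussian Mixture Model, the core modeling step named above, can be sketched with scikit-learn; the simulated planar patch and component count are assumptions, and the plane constraint and supervision scheme of LI-GS are not reproduced here.

```python
# Minimal sketch: fit a GMM to a (simulated) LiDAR point patch with scikit-learn.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical points scattered around a roughly planar wall patch (small noise in z).
points = np.column_stack([rng.uniform(0, 5, 2000),
                          rng.uniform(0, 3, 2000),
                          0.02 * rng.normal(size=2000)])

gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0).fit(points)
print(gmm.means_.shape, gmm.covariances_.shape)   # (8, 3) means, (8, 3, 3) covariances
print(gmm.score(points))                          # average log-likelihood of the points under the GMM
```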
{"title":"LI-GS: Gaussian Splatting With LiDAR Incorporated for Accurate Large-Scale Reconstruction","authors":"Changjian Jiang;Ruilan Gao;Kele Shao;Yue Wang;Rong Xiong;Yu Zhang","doi":"10.1109/LRA.2024.3522846","DOIUrl":"https://doi.org/10.1109/LRA.2024.3522846","url":null,"abstract":"Large-scale 3D reconstruction is critical in the field of robotics, and the potential of 3D Gaussian Splatting (3DGS) for achieving accurate object-level reconstruction has been demonstrated. However, ensuring geometric accuracy in outdoor and unbounded scenes remains a significant challenge. This study introduces LI-GS, a reconstruction system that incorporates LiDAR and Gaussian Splatting to enhance geometric accuracy in large-scale scenes. 2D Gaussain surfels are employed as the map representation to enhance surface alignment. Additionally, a novel modeling method is proposed to convert LiDAR point clouds to plane-constrained multimodal Gaussian Mixture Models (GMMs). The GMMs are utilized during both initialization and optimization stages to ensure sufficient and continuous supervision over the entire scene while mitigating the risk of over-fitting. Furthermore, GMMs are employed in mesh extraction to eliminate artifacts and improve the overall geometric quality. Experiments demonstrate that our method outperforms state-of-the-art methods in large-scale 3D reconstruction, achieving higher accuracy compared to both LiDAR-based methods and Gaussian-based methods with improvements of 52.6% and 68.7%, respectively.","PeriodicalId":13241,"journal":{"name":"IEEE Robotics and Automation Letters","volume":"10 2","pages":"1864-1871"},"PeriodicalIF":4.6,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142975982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0