
Latest publications from Robotics: Science and Systems XVIII

Learning Interpretable, High-Performing Policies for Autonomous Driving
Pub Date : 2022-02-04 DOI: 10.15607/rss.2022.xviii.068
Rohan R. Paleja, Yaru Niu, Andrew Silva, Chace Ritchie, Sugju Choi, M. Gombolay
Gradient-based approaches in reinforcement learning (RL) have achieved tremendous success in learning policies for autonomous vehicles. While the performance of these approaches warrants real-world adoption, these policies lack interpretability, limiting deployability in the safety-critical and legally-regulated domain of autonomous driving (AD). AD requires interpretable and verifiable control policies that maintain high performance. We propose Interpretable Continuous Control Trees (ICCTs), a tree-based model that can be optimized via modern, gradient-based RL approaches to produce high-performing, interpretable policies. The key to our approach is a procedure for allowing direct optimization in a sparse decision-tree-like representation. We validate ICCTs against baselines across six domains, showing that ICCTs are capable of learning interpretable policy representations that match or outperform baselines by up to 33% in AD scenarios while achieving a 300x-600x reduction in the number of policy parameters relative to deep learning baselines. Furthermore, we demonstrate the interpretability and utility of our ICCTs through a 14-car physical robot demonstration.
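To make the idea of a gradient-optimizable, tree-structured control policy concrete, the following is a minimal sketch of a depth-1 soft decision tree in PyTorch: a sigmoid-gated linear split routes each observation to one of two linear leaf controllers, and the whole policy is differentiable end to end. The class and parameter names are illustrative assumptions; this is not the authors' ICCT implementation or their sparsification procedure.

```python
# Illustrative sketch only: a tiny differentiable, tree-structured policy in the
# spirit of the paper (a soft decision node over a linear split, linear
# controllers at the leaves). Not the authors' ICCT implementation.
import torch
import torch.nn as nn


class SoftDecisionTreePolicy(nn.Module):
    """Depth-1 soft decision tree: one split node, two leaf controllers."""

    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        # Split node: a learnable linear test on the observation.
        self.split_w = nn.Parameter(torch.randn(obs_dim) * 0.1)
        self.split_b = nn.Parameter(torch.zeros(1))
        # Leaf controllers: simple linear maps from observation to action.
        self.leaf_left = nn.Linear(obs_dim, act_dim)
        self.leaf_right = nn.Linear(obs_dim, act_dim)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Soft routing probability; at deployment this can be hardened
        # (p > 0.5) to recover a crisp, interpretable decision rule.
        p_left = torch.sigmoid(obs @ self.split_w + self.split_b).unsqueeze(-1)
        return p_left * self.leaf_left(obs) + (1.0 - p_left) * self.leaf_right(obs)


if __name__ == "__main__":
    policy = SoftDecisionTreePolicy(obs_dim=4, act_dim=2)
    action = policy(torch.randn(8, 4))  # batch of 8 observations
    print(action.shape)  # torch.Size([8, 2])
```

Because every operation above is differentiable, such a policy can be plugged into a standard policy-gradient RL loop in place of a neural network, while the split weights and leaf controllers remain directly inspectable.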
Citations: 3
Hydra: A Real-time Spatial Perception System for 3D Scene Graph Construction and Optimization
Pub Date : 2022-01-31 DOI: 10.15607/rss.2022.xviii.050
Nathan Hughes, Yun Chang, L. Carlone
3D scene graphs have recently emerged as a powerful high-level representation of 3D environments. A 3D scene graph describes the environment as a layered graph where nodes represent spatial concepts at multiple levels of abstraction and edges represent relations between concepts. While 3D scene graphs can serve as an advanced "mental model" for robots, how to build such a rich representation in real-time is still uncharted territory. This paper describes a real-time Spatial Perception System, a suite of algorithms to build a 3D scene graph from sensor data in real-time. Our first contribution is to develop real-time algorithms to incrementally construct the layers of a scene graph as the robot explores the environment; these algorithms build a local Euclidean Signed Distance Function (ESDF) around the current robot location, extract a topological map of places from the ESDF, and then segment the places into rooms using an approach inspired by community-detection techniques. Our second contribution is to investigate loop closure detection and optimization in 3D scene graphs. We show that 3D scene graphs allow defining hierarchical descriptors for loop closure detection; our descriptors capture statistics across layers in the scene graph, ranging from low-level visual appearance to summary statistics about objects and places. We then propose the first algorithm to optimize a 3D scene graph in response to loop closures; our approach relies on embedded deformation graphs to simultaneously correct all layers of the scene graph. We implement the proposed Spatial Perception System into an architecture named Hydra, that combines fast early and mid-level perception processes with slower high-level perception. We evaluate Hydra on simulated and real data and show it is able to reconstruct 3D scene graphs with an accuracy comparable with batch offline methods despite running online.
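As a rough illustration of the layered-graph representation described above, here is a minimal sketch of a scene graph container whose nodes carry a layer label (e.g., object, place, room, building) and whose edges are split into intra-layer and inter-layer relations. The layer taxonomy and field names are assumptions; Hydra's actual data structures, ESDF extraction, and room segmentation are not shown.

```python
# Minimal sketch of a layered 3D scene graph container, assuming a simple
# layer taxonomy. Illustrates the data structure only, not Hydra itself.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SceneGraphNode:
    node_id: int
    layer: str                                  # e.g. "object", "place", "room", "building"
    position: Tuple[float, float, float]
    attributes: dict = field(default_factory=dict)


@dataclass
class LayeredSceneGraph:
    nodes: Dict[int, SceneGraphNode] = field(default_factory=dict)
    intra_layer_edges: List[Tuple[int, int]] = field(default_factory=list)  # e.g. place-place traversability
    inter_layer_edges: List[Tuple[int, int]] = field(default_factory=list)  # e.g. place belongs-to room

    def add_node(self, node: SceneGraphNode) -> None:
        self.nodes[node.node_id] = node

    def add_edge(self, src: int, dst: int) -> None:
        # Route the edge to the right list based on the layers of its endpoints.
        same_layer = self.nodes[src].layer == self.nodes[dst].layer
        (self.intra_layer_edges if same_layer else self.inter_layer_edges).append((src, dst))


if __name__ == "__main__":
    g = LayeredSceneGraph()
    g.add_node(SceneGraphNode(0, "place", (1.0, 2.0, 0.0)))
    g.add_node(SceneGraphNode(1, "room", (1.5, 2.5, 0.0), {"label": "kitchen"}))
    g.add_edge(0, 1)  # place 0 is contained in room 1
    print(len(g.inter_layer_edges))  # 1
```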
Citations: 36
Invariance Through Latent Alignment
Pub Date : 2021-12-15 DOI: 10.15607/rss.2022.xviii.064
Takuma Yoneda, Ge Yang, Matthew R. Walter, Bradly C. Stadie
A robot's deployment environment often involves perceptual changes that differ from what it has experienced during training. Standard practices such as data augmentation attempt to bridge this gap by augmenting source images in an effort to extend the support of the training distribution to better cover what the agent might experience at test time. In many cases, however, it is impossible to know the test-time distribution shift a priori, making these schemes infeasible. In this paper, we introduce a general approach, called Invariance Through Latent Alignment (ILA), that improves the test-time performance of a visuomotor control policy in deployment environments with unknown perceptual variations. ILA performs unsupervised adaptation at deployment time by matching the distribution of latent features on the target domain to the agent's prior experience, without relying on paired data. Although simple, we show that this idea leads to surprising improvements on a variety of challenging adaptation scenarios, including changes in lighting conditions, the content in the scene, and camera poses. We present results on calibrated control benchmarks in simulation (the distractor control suite) and on a physical robot under a sim-to-real setup.
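The following sketch illustrates the general flavor of unsupervised latent alignment at deployment time: latent statistics collected on the source domain are stored, and the encoder is adapted on unlabeled target observations so that the target latent statistics match them. The moment-matching loss and the training loop are assumptions chosen for brevity, not the paper's exact objective or architecture.

```python
# Hedged sketch of distribution matching in latent space: adapt the encoder at
# deployment so target-domain latent statistics match statistics saved from
# training. The moment-matching loss is an illustrative assumption.
import torch
import torch.nn as nn


def latent_alignment_loss(target_latents: torch.Tensor,
                          source_mean: torch.Tensor,
                          source_std: torch.Tensor) -> torch.Tensor:
    """Match first and second moments of target latents to stored source stats."""
    t_mean = target_latents.mean(dim=0)
    t_std = target_latents.std(dim=0)
    return ((t_mean - source_mean) ** 2).mean() + ((t_std - source_std) ** 2).mean()


if __name__ == "__main__":
    encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 16))
    # Statistics of latent features collected on the source (training) domain.
    source_mean, source_std = torch.zeros(16), torch.ones(16)
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    target_obs = torch.randn(256, 64)          # unlabeled observations from deployment
    for _ in range(100):                       # unsupervised adaptation loop
        latents = encoder(target_obs)
        loss = latent_alignment_loss(latents, source_mean, source_std)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The key design point is that no target-domain labels or paired source/target images are needed: only the stored source statistics and a stream of unlabeled deployment observations.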
Citations: 3
TNS: Terrain Traversability Mapping and Navigation System for Autonomous Excavators
Pub Date : 2021-09-13 DOI: 10.15607/rss.2022.xviii.049
Tianrui Guan, Zhenpeng He, Ruitao Song, Dinesh Manocha, Liangjun Zhang
We present a terrain traversability mapping and navigation system (TNS) for autonomous excavator applications in an unstructured environment. We use an efficient approach to extract terrain features from RGB images and 3D point clouds and incorporate them into a global map for planning and navigation. Our system can adapt to changing environments and update the terrain information in real-time. Moreover, we present a novel dataset, the Complex Worksite Terrain (CWT) dataset, which consists of RGB images from construction sites with seven categories based on navigability. Our novel algorithms improve the mapping accuracy over previous SOTA methods by 4.17-30.48% and reduce MSE on the traversability map by 13.8-71.4%. We have combined our mapping approach with planning and control modules in an autonomous excavator navigation system and observe a 49.3% improvement in the overall success rate. Based on TNS, we demonstrate the first autonomous excavator that can navigate through unstructured environments consisting of deep pits, steep hills, rock piles, and other complex terrain features.
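As a toy illustration of accumulating terrain information into a global map, the sketch below fuses per-point traversability scores into a 2D grid with a running average per cell. The cell size, map extent, and fusion rule are assumed for illustration and do not reflect the TNS implementation.

```python
# Toy sketch: fuse per-observation traversability scores into a global 2D grid
# map with a running average per cell. All parameters are illustrative assumptions.
import numpy as np

CELL_SIZE = 0.5          # metres per grid cell (assumed)
MAP_DIM = 200            # 100 m x 100 m map centred on the origin

traversability = np.full((MAP_DIM, MAP_DIM), 0.5, dtype=np.float32)  # 0 = blocked, 1 = free
hits = np.zeros((MAP_DIM, MAP_DIM), dtype=np.int32)


def fuse_observation(points_xy: np.ndarray, scores: np.ndarray) -> None:
    """Incrementally average per-point traversability scores into grid cells."""
    cells = np.floor(points_xy / CELL_SIZE).astype(int) + MAP_DIM // 2
    valid = np.all((cells >= 0) & (cells < MAP_DIM), axis=1)
    for (cx, cy), s in zip(cells[valid], scores[valid]):
        hits[cx, cy] += 1
        traversability[cx, cy] += (s - traversability[cx, cy]) / hits[cx, cy]


if __name__ == "__main__":
    pts = np.random.uniform(-10, 10, size=(1000, 2))     # terrain points in metres
    sc = np.random.uniform(0, 1, size=1000)              # per-point traversability scores
    fuse_observation(pts, sc)
    print(traversability.min(), traversability.max())
```

A planner can then treat cells below some traversability threshold as obstacles when searching the global map.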
Citations: 17
FaDIV-Syn: Fast Depth-Independent View Synthesis using Soft Masks and Implicit Blending
Pub Date : 2021-06-24 DOI: 10.15607/rss.2022.xviii.054
Andre Rochow, Max Schwarz, Michael Weinmann, Sven Behnke
Novel view synthesis is required in many robotic applications, such as VR teleoperation and scene reconstruction. Existing methods are often too slow for these contexts, cannot handle dynamic scenes, and are limited by their explicit depth estimation stage, where incorrect depth predictions can lead to large projection errors. Our proposed method runs in real time on live streaming data and avoids explicit depth estimation by efficiently warping input images into the target frame for a range of assumed depth planes. The resulting plane sweep volume (PSV) is directly fed into our network, which first estimates soft PSV masks in a self-supervised manner, and then directly produces the novel output view. This improves efficiency and performance on transparent, reflective, thin, and feature-less scene parts. FaDIV-Syn can perform both interpolation and extrapolation tasks at 540p in real-time and outperforms state-of-the-art extrapolation methods on the large-scale RealEstate10k dataset. We thoroughly evaluate ablations, such as removing the Soft-Masking network, training from fewer examples as well as generalization to higher resolutions and stronger depth discretization. Our implementation is available.
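To illustrate the plane-sweep construction described above, the sketch below warps a single source image into the target view once per assumed fronto-parallel depth plane using a per-plane homography, then stacks the warped images into a volume. The camera parameters, the single-source setup, and the OpenCV-based warping are illustrative assumptions rather than the FaDIV-Syn pipeline, which feeds such a volume into a network that predicts soft masks and blends the output view.

```python
# Minimal sketch: build a plane sweep volume (PSV) by warping a source image
# into the target view for a set of assumed depth planes. Illustrative only.
import cv2
import numpy as np


def plane_sweep_volume(src_img, K_src, K_tgt, R, t, depths):
    """Warp src_img into the target view once per assumed depth plane.

    R, t map points from the target camera frame to the source camera frame:
    X_src = R @ X_tgt + t. Planes are fronto-parallel in the target frame.
    """
    h, w = src_img.shape[:2]
    n = np.array([[0.0, 0.0, 1.0]])            # plane normal in the target frame
    volume = []
    for d in depths:
        # Homography mapping target pixels to source pixels for the plane z = d.
        H = K_src @ (R + t.reshape(3, 1) @ n / d) @ np.linalg.inv(K_tgt)
        # WARP_INVERSE_MAP: H is the dst->src mapping, which is what we built.
        warped = cv2.warpPerspective(src_img, H, (w, h),
                                     flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
        volume.append(warped)
    return np.stack(volume)                     # (num_planes, H, W, C)


if __name__ == "__main__":
    img = np.zeros((240, 320, 3), dtype=np.uint8)
    K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
    R, t = np.eye(3), np.array([0.1, 0.0, 0.0])   # small lateral baseline (assumed)
    psv = plane_sweep_volume(img, K, K, R, t, depths=[1.0, 2.0, 4.0, 8.0])
    print(psv.shape)  # (4, 240, 320, 3)
```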
Citations: 0