
Latest publications: 2013 IEEE Workshop on Robot Vision (WORV)

Segment-based robotic mapping in dynamic environments
Pub Date: 2013-05-30 DOI: 10.1109/WORV.2013.6521913
Ross T. Creed, R. Lakaemper
This paper introduces a dynamic mapping algorithm based on line segments. The use of higher-level geometric features allows for fast and robust identification of inconsistencies between incoming sensor data and an existing robotic map. Handling these inconsistencies with a partial-segment likelihood measure produces a robot mapping system that evolves with the changing features of a dynamic environment. The algorithm is tested in a large-scale simulation of a storage logistics center and in a real-world office environment, and compared against the current state of the art.
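To make the partial-segment idea concrete, here is a minimal sketch, assuming a 2D map of line segments and 2D laser scan points; the function names and the distance-threshold support measure are illustrative stand-ins, not the authors' implementation:

```python
import numpy as np

def point_to_segment_distance(p, a, b):
    # Euclidean distance from 2D point p to the segment with endpoints a, b.
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def partial_segment_support(scan_points, seg_a, seg_b, sigma=0.05):
    # Fraction of scan points lying within ~2*sigma of a map segment:
    # a simplified stand-in for the paper's partial-segment likelihood.
    d = np.array([point_to_segment_distance(p, seg_a, seg_b)
                  for p in scan_points])
    return float(np.mean(d < 2.0 * sigma))

# A map segment whose support stays near zero over consecutive scans is
# inconsistent with the incoming data and would be revised or removed.
scan = np.random.default_rng(0).uniform(0, 1, size=(100, 2))
print(partial_segment_support(scan, np.array([0.0, 0.5]), np.array([1.0, 0.5])))
```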
Citations: 2
Why would I want a gyroscope on my RGB-D sensor?
Pub Date: 2013-01-01 DOI: 10.1109/WORV.2013.6521916
H. Ovrén, Per-Erik Forssén, D. Tornqvist
Many RGB-D sensors, e.g. the Microsoft Kinect, use rolling shutter cameras. Such cameras produce geometrically distorted images when the sensor is moving. To mitigate these rolling shutter distortions we propose a method that uses an attached gyroscope to rectify the depth scans. We also present a simple scheme to calibrate the relative pose and time synchronization between the gyro and a rolling shutter RGB-D sensor. We examine the effectiveness of our rectification scheme by coupling it with the Kinect Fusion algorithm. By comparing Kinect Fusion models obtained from raw sensor scans and from rectified scans, we demonstrate improvement for three classes of sensor motion: panning motions cause slant distortions, and tilt motions cause vertically elongated or compressed objects. For wobble we also observe a loss of detail compared to the reconstruction using rectified depth scans. As our method relies on gyroscope readings, the amount of computation required is negligible compared to the cost of running Kinect Fusion.
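A minimal sketch of the row-wise rectification idea follows, assuming the gyro rate is constant over one frame readout and that depth pixels have already been back-projected to 3D points; all names are hypothetical, and the small-angle rotation is a simplification of properly integrating the gyro readings:

```python
import numpy as np

def row_time(t_frame, row, num_rows, readout):
    # Capture time of image row `row` under a rolling shutter that reads
    # the frame top-to-bottom over `readout` seconds.
    return t_frame + readout * row / num_rows

def small_angle_R(omega, dt):
    # First-order rotation matrix for gyro rate omega (rad/s, body frame)
    # applied over a short interval dt.
    wx, wy, wz = omega * dt
    return np.array([[1.0, -wz,  wy],
                     [ wz, 1.0, -wx],
                     [-wy,  wx, 1.0]])

def rectify_point(X_row, omega, t_row, t_mid):
    # Rotate a back-projected 3D point from its row's capture time to the
    # mid-frame time, undoing the rotation that occurred during readout.
    return small_angle_R(omega, t_mid - t_row) @ X_row
```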
Citations: 16
Visual-inertial navigation with guaranteed convergence
DOI: 10.1109/WORV.2013.6521930
F. Di Corato, M. Innocenti, L. Pollini
This contribution presents a constraint-based, loosely coupled Augmented Implicit Kalman Filter approach to vision-aided inertial navigation that uses epipolar constraints as the output map. The proposed approach is capable of estimating the standard navigation outputs (velocity, position and attitude) together with inertial sensor biases. An observability analysis is carried out in order to define the motion requirements for full observability of the system and asymptotic convergence of the parameter estimates. Simulations are presented to support the theoretical conclusions.
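For reference, the standard two-view epipolar constraint used as an implicit (zero-valued) measurement takes the form below; the paper's exact output map may differ in parametrization:

\[
g(\mathbf{x}, \mathbf{z}) = \mathbf{p}_2^{\top} E\, \mathbf{p}_1 = 0, \qquad E = [\mathbf{t}]_{\times} R,
\]

where \(\mathbf{p}_1, \mathbf{p}_2\) are normalized image coordinates of the same point in two views, and \(R, \mathbf{t}\) are the relative rotation and translation predicted from the inertial navigation state. Because \(g\) constrains the state without returning a conventional measurement, the Kalman update operates on the implicit residual rather than on a predicted-versus-observed difference.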
Citations: 4
Clustering of image features based on contact and occlusion among robot body and objects
DOI: 10.1109/WORV.2013.6521939
T. Somei, Y. Kobayashi, A. Shimizu, T. Kaneko
This paper presents a recognition framework for a robot with no predefined knowledge of its environment. Image features (keypoints) are clustered based on statistical dependencies with respect to their motions and occlusions. Estimation of conditional probability is used to evaluate statistical dependencies between the robot's configuration and features in images. Features that move depending on the configuration of the robot can be regarded as part of the robot's body. Different kinds of occlusion can occur depending on the relative position of the robot's hand and objects. These differences can be expressed as different structures of a `dependency network' in the proposed framework. The proposed recognition was verified experimentally using a humanoid robot equipped with a camera and arm. It was first confirmed that part of the robot body was autonomously extracted, without any a priori knowledge, using conditional probability. In the generation of the dependency network, different network structures were constructed depending on the position of the robot's hand relative to an object.
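One way to realize such a dependency estimate is sketched below, swapping in mutual information computed from a joint histogram as the dependence measure; the paper itself estimates conditional probabilities, so treat this as an illustrative variant with hypothetical names:

```python
import numpy as np

def dependency_score(robot_cfg, feat_motion, bins=8):
    # Mutual information between a 1D robot-configuration signal and a
    # feature's motion magnitude, estimated from a joint histogram.
    joint, _, _ = np.histogram2d(robot_cfg, feat_motion, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over configuration
    py = pxy.sum(axis=0, keepdims=True)   # marginal over motion
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

# Features scoring high against an arm-joint signal would be clustered as
# body parts; low-scoring features as environment.
```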
Citations: 3
RMSD: A 3D real-time mid-level scene description system
DOI: 10.1007/978-3-662-43859-6_2
K. Georgiev, R. Lakaemper
{"title":"RMSD: A 3D real-time mid-level scene description system","authors":"K. Georgiev, R. Lakaemper","doi":"10.1007/978-3-662-43859-6_2","DOIUrl":"https://doi.org/10.1007/978-3-662-43859-6_2","url":null,"abstract":"","PeriodicalId":130461,"journal":{"name":"2013 IEEE Workshop on Robot Vision (WORV)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123935482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Real-time obstacle detection and avoidance in the presence of specular surfaces using an active 3D sensor
DOI: 10.1109/WORV.2013.6521938
B. Peasley, Stan Birchfield
This paper proposes a novel approach to obstacle detection and avoidance using a 3D sensor. We depart from the approach of previous researchers who use depth images from 3D sensors projected onto UV-disparity to detect obstacles. Instead, our approach relies on projecting 3D points onto the ground plane, which is estimated during a calibration step. A 2D occupancy map is then used to determine the presence of obstacles, from which translation and rotation velocities are computed to avoid the obstacles. Two innovations are introduced to overcome the limitations of the sensor: An infinite pole approach is proposed to hypothesize infinitely tall, thin obstacles when the sensor yields invalid readings, and a control strategy is adopted to turn the robot away from scenes that yield a high percentage of invalid readings. Together, these extensions enable the system to overcome the inherent limitations of the sensor. Experiments in a variety of environments, including dynamic objects, obstacles of varying heights, and dimly-lit conditions, show the ability of the system to perform robust obstacle avoidance in real time under realistic indoor conditions.
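A minimal sketch of the ground-plane projection and 2D occupancy step, assuming the plane n.x + d = 0 comes from the calibration step and that the plane normal is roughly aligned with the sensor's z-axis; parameter names and thresholds are illustrative, not the authors' values:

```python
import numpy as np

def occupancy_from_points(points, n, d, cell=0.05,
                          h_min=0.1, h_max=2.0, grid=200):
    # points: (N, 3) array of back-projected depth points, sensor frame.
    # (n, d) describe the calibrated ground plane n . x + d = 0, |n| = 1.
    height = points @ n + d                     # signed height above plane
    obst = points[(height > h_min) & (height < h_max)]
    occ = np.zeros((grid, grid), dtype=bool)
    # Assume n is roughly the z-axis, so x and y are approximately in-plane.
    ij = np.floor(obst[:, :2] / cell).astype(int) + grid // 2
    ok = (ij >= 0).all(axis=1) & (ij < grid).all(axis=1)
    occ[ij[ok, 0], ij[ok, 1]] = True            # mark occupied cells
    return occ

# Cells marked True feed the avoidance controller. In the spirit of the
# "infinite pole" idea, a pixel with an invalid depth reading could instead
# mark the cell along its ray as occupied at all heights.
```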
Citations: 44
Probabilistic analysis of incremental light bundle adjustment
DOI: 10.1109/WORV.2013.6521942
Vadim Indelman, Richard Roberts, F. Dellaert
This paper presents a probabilistic analysis of the recently introduced incremental light bundle adjustment method (iLBA) [6]. In iLBA, the observed 3D points are algebraically eliminated, resulting in a cost function with only the camera poses as variables, and an incremental smoothing technique is applied for efficiently processing incoming images. While we have already shown that, compared to conventional bundle adjustment (BA), iLBA yields a significant improvement in computational complexity with similar levels of accuracy, the probabilistic properties of iLBA have not been analyzed thus far. In this paper we consider the probability distribution that corresponds to the iLBA cost function, and analyze how well it represents the true density of the camera poses given the image measurements. The latter can be calculated exactly in bundle adjustment (BA) by marginalizing out the 3D points from the joint distribution of camera poses and 3D points. We present a theoretical analysis of the differences in the way that LBA and BA use measurement information. Using indoor and outdoor datasets, we show that the first two moments of the iLBA and the true probability distributions are very similar in practice.
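The comparison described in the abstract can be written compactly. BA defines the exact pose density by marginalizing the 3D points \(L\) out of the joint over poses \(X\) and points given measurements \(Z\), while iLBA works with a surrogate density over poses only, assembled from multi-view constraints \(h_i\) that no longer involve \(L\) (the exact constraint form is given in the paper):

\[
p_{\mathrm{BA}}(X \mid Z) = \int p(X, L \mid Z)\, dL, \qquad
p_{\mathrm{iLBA}}(X \mid Z) \propto \prod_i \exp\!\Big(-\tfrac{1}{2}\,\big\lVert h_i(X, Z) \big\rVert_{\Sigma_i}^{2}\Big).
\]

The paper's analysis then compares the first two moments of these two distributions.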
Citations: 6