
Latest publications in Proceedings Integration of Speech and Image Understanding

Towards computer vision with description logics: some recent progress
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824868
R. Moller, B. Neumann, Michael Wessel
A description logic (DL) is a knowledge representation formalism which may provide interesting inference services for diverse application areas. This paper first gives an overview of the benefits which a DL may provide for computer vision. The main body of the paper presents recent work at Hamburg University on extending DLs to handle spatial reasoning and default reasoning.
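As a minimal sketch of the kind of inference service a DL offers a vision system (this is not the Hamburg system described in the paper, and all concept and feature names below are invented), subsumption checking over purely conjunctive concepts can be illustrated in a few lines of Python:

```python
# Toy DL inference for vision: subsumption over purely conjunctive concepts.
# Hypothetical sketch only; concept and feature names are invented.

# TBox: each concept is a conjunction (set) of required atomic features.
TBOX = {
    "Window":      {"opening", "glazed"},
    "GableWindow": {"opening", "glazed", "in_gable"},  # specializes Window
    "Door":        {"opening", "walk_through"},
}

def subsumes(general, specific):
    """In the conjunctive fragment, C subsumes D iff every feature
    required by C is also required by D."""
    return TBOX[general] <= TBOX[specific]

def classify(features):
    """Return every TBox concept instantiated by an observed feature set,
    e.g. the attributes extracted from one image region."""
    return [name for name, req in TBOX.items() if req <= features]

if __name__ == "__main__":
    region = {"opening", "glazed", "in_gable", "rectangular"}
    print(classify(region))                   # ['Window', 'GableWindow']
    print(subsumes("Window", "GableWindow"))  # True
```

A vision system can thus ask which known concepts an extracted region instantiates, and the TBox ordering tells it which answer is most specific.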
Citations: 36
Connecting concepts from vision and speech processing
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824829
S. Wachsmuth, G. Sagerer
This paper addresses the problem of how to establish referential links between interpretations of speech and visual data. In order to get rid of erroneous, vague, or incomplete conceptual descriptions, we propose a probabilistic interaction scheme. The modelling of dependencies and the calculation of inferences are realized by using Bayesian networks. This interaction scheme provides a basis for disambiguation and error recovery. We implemented an interaction component in an assembly task environment. A robot constructor can be instructed by speech and pointing gestures in order to connect primitive component parts of a wooden toy construction kit. The system is evaluated on a test data set which consists of 448 spoken utterances from 16 speakers who name objects on 10 images from different scenes. First results show the effectiveness and robustness of the probabilistic approach.
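The fusion idea can be sketched roughly as follows (this is a naive-Bayes-shaped reduction for illustration, not the paper's actual network, and every probability below is invented): treat the referenced object as a hidden variable with speech and vision evidence assumed conditionally independent given the object, and combine them by Bayes' rule.

```python
# Minimal sketch of probabilistic audio-visual fusion: hidden object O with
# noisy speech label S and visual label V, conditionally independent given O.
# All distributions are invented for illustration.

PRIOR = {"bolt": 1 / 3, "cube": 1 / 3, "bar": 1 / 3}

LIKELIHOOD = {
    "speech": {  # P(recognized word | true object)
        "bolt": {"bolt": 0.7, "cube": 0.2, "bar": 0.1},
        "cube": {"bolt": 0.2, "cube": 0.6, "bar": 0.2},
        "bar":  {"bolt": 0.1, "cube": 0.2, "bar": 0.7},
    },
    "vision": {  # P(detected shape class | true object)
        "bolt": {"elongated": 0.6, "compact": 0.4},
        "cube": {"elongated": 0.1, "compact": 0.9},
        "bar":  {"elongated": 0.8, "compact": 0.2},
    },
}

def posterior(evidence):
    """P(O | evidence) is proportional to P(O) * product over sources of
    P(e_source | O); normalize at the end."""
    unnorm = {}
    for obj, p in PRIOR.items():
        for source, value in evidence.items():
            p *= LIKELIHOOD[source][obj][value]
        unnorm[obj] = p
    z = sum(unnorm.values())
    return {obj: p / z for obj, p in unnorm.items()}

if __name__ == "__main__":
    # Speech said "bar" but vision sees a compact blob: fusion favors "cube",
    # illustrating disambiguation and error recovery.
    print(posterior({"speech": "bar", "vision": "compact"}))
```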
Citations: 6
From images to sentences via spatial relations
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824875
A. Abella, J. Kender
This work presents a conceptual framework for representing, manipulating, measuring, and communicating in natural language several ideas about topological (non-metric) spatial locations, object spatial contexts, and user expectations of spatial relationships. It articulates a theory of spatial relations, how they can be represented as fuzzy predicates internally, and how they can be appropriately derived from imagery; then, how they can be augmented or filtered using prior knowledge, and lastly, how they can produce natural language statements about location and space. This framework quantifies the notions of context and vagueness, so that all spatial relations are measurably accurate, provably efficient, and matched to users' expectations. The work makes explicit two critical heuristics for reducing the complexity of the relationships implicit in imagery, one a general rule for single object descriptions, and the other a general rule for rank ordering object relationships. A derived working system combines variable aspects of computer science and linguistics in such a way as to be extensible to many environments. The system has been demonstrated in both a landmark navigation task and a medical task, two very separate domains, and has been evaluated in both.
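One concrete way to realize a spatial relation as a fuzzy predicate (an assumed cosine-shaped membership function for illustration, not the authors' definition) is to map the angle between two object centroids to a degree in [0, 1]:

```python
import math

def degree_left_of(ref, obj):
    """Fuzzy membership of 'obj is left of ref', computed from the angle of
    the displacement vector: 1.0 exactly left of ref, fading to 0.0 at
    90 degrees off-axis and beyond. Assumed membership shape, for
    illustration only."""
    dx, dy = obj[0] - ref[0], obj[1] - ref[1]
    angle = math.atan2(dy, dx)             # 0 rad = straight right (+x)
    deviation = abs(abs(angle) - math.pi)  # 0 rad = straight left (-x)
    return max(0.0, math.cos(deviation))

if __name__ == "__main__":
    print(degree_left_of((0, 0), (-5, 0)))  # 1.0   (exactly left)
    print(degree_left_of((0, 0), (-5, 3)))  # ~0.86 (left and a bit up)
    print(degree_left_of((0, 0), (5, 0)))   # 0.0   (to the right)
```

Graded values like these are what make the relations "measurably accurate" rather than brittle true/false judgments.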
Citations: 22
Knowledge based image and speech analysis for service robots
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824841
U. Ahlrichs, J. Fischer, Joachim Denzler, C. Drexler, H. Niemann, E. Noth, D. Paulus
Active vision-based scene exploration as well as speech understanding and dialogue are important skills of a service robot which is employed in natural environments and has to interact with humans. In this paper we suggest a knowledge based approach for both scene exploration and spoken dialogue using semantic networks. For scene exploration the knowledge base contains information about camera movements and objects. In the dialogue system the knowledge base contains information about the individual dialogue steps as well as about syntax and semantics of utterances. In order to make use of the knowledge, an iterative control algorithm which has real-time and any-time capabilities is applied. In addition, we propose appearance based object models which can substitute the object models represented in the knowledge base for scene exploration. We show the applicability of the approach for exploration of office scenes and for spoken dialogues in the experiments. The integration of the multi-sensory input can easily be done, since the knowledge about both application domains is represented using the same network formalism.
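The "real-time and any-time" control contract can be sketched as a generic pattern (this is not the paper's actual iterative algorithm): the loop always holds a best-so-far result, so it can be interrupted when the time budget runs out and still return a usable answer.

```python
import time

def anytime_best(candidates, score, deadline_s):
    """Generic any-time control sketch (not the paper's algorithm): iterate
    over candidate analysis actions, keep the best one scored so far, and
    stop as soon as the real-time budget is exhausted."""
    best, best_score = None, float("-inf")
    t0 = time.monotonic()
    for cand in candidates:
        if time.monotonic() - t0 > deadline_s:
            break                      # budget spent: return current best
        s = score(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

if __name__ == "__main__":
    # Hypothetical camera actions, scored here by a stand-in function.
    actions = ["pan_left", "pan_right", "zoom_in", "zoom_out"]
    print(anytime_best(actions, score=len, deadline_s=0.05))
```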
Citations: 19
From video to language - a detour via logic vs. jumping to conclusions
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824862
H. Nagel
Temporal developments within a scene can be recorded by a video camera in the form of spatio-temporal grayvalue variations. The digitization and subsequent algorithmic evaluation of the resulting video sequence transforms, as a first step, the original signal into a geometric description which comprises the shape, position, and trajectory of bodies in the depicted 3D scene. In order to facilitate communication of this information to human users, it appears advantageous to transform such a geometric description, as a second step, into a fuzzy metric-temporal logic representation. The latter can in turn be processed by logic operations in order to extract the information of interest to a particular user at the time of his interaction with the system. This contribution discusses problems which show up in an attempt to specify and use a fuzzy metric-temporal logic representation of traffic situations at inner-city road intersections.
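To make the representation concrete, here is a small assumed example (not Nagel's actual formalism; thresholds and speeds are invented): evaluate a fuzzy predicate over a metric time interval, computing the degree to which "the vehicle is slow" holds throughout, and at some point during, a sampled trajectory.

```python
def degree_slow(speed_mps, full_at=0.0, zero_at=5.0):
    """Fuzzy degree to which a speed counts as 'slow': 1.0 at or below
    full_at m/s, falling linearly to 0.0 at zero_at m/s. Thresholds are
    assumed for illustration."""
    d = (zero_at - speed_mps) / (zero_at - full_at)
    return max(0.0, min(1.0, d))

def always(degrees):
    """'phi holds throughout the interval': Goedel t-norm (minimum)."""
    return min(degrees)

def sometime(degrees):
    """'phi holds at some point in the interval': t-conorm (maximum)."""
    return max(degrees)

if __name__ == "__main__":
    # Hypothetical speeds (m/s) sampled once per frame over an interval.
    speeds = [6.0, 4.5, 3.0, 1.5, 0.2]
    degrees = [degree_slow(v) for v in speeds]
    print(always(degrees), sometime(degrees))  # 0.0 (first frame fast), 0.96
```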
Citations: 8
Towards affective integration of vision, behavior, and speech processing
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824850
Naoyuki Okada, Kentaro Inui, M. Tokuhisa
In each subfield of artificial intelligence such as image understanding, speech understanding, robotics, etc., a tremendous amount of research effort has so far yielded considerable results. Unfortunately, they have ended up too different to combine with one another straightforwardly. We have been conducting a case study, the AESOPWORLD project, aiming at establishing an architectural foundation for "integrated" intelligent agents. In this article, we first review our agent model, which integrates seven mental faculties (recognition, planning, action, desire, emotion, memory, language) and two physical faculties (sensor, actuator). We then describe the faculties of recognition, action, and planning, and their interaction, centering around planning. Image understanding is understood as a part of this recognition. Next, we show dialogue processing, where the faculties of recognition and planning also play an essential role in communication. Finally, we discuss the faculty of emotions to show an application of our agent to affective communication. This computation of emotions could be expected to be a basis for human-friendly interfaces.
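Schematically, such a faculty-integrated cycle might be wired as below. This is an invented reduction of the general idea, not the AESOPWORLD architecture: recognition feeds planning, and emotion is appraised from how well the recognized state satisfies the agent's desire; all names and rules are hypothetical stubs.

```python
# Invented sketch of a perceive-appraise-plan-act cycle; not AESOPWORLD.

def recognize(sensor_input):
    """Recognition faculty (stub): map raw input to a symbolic state."""
    return sensor_input["scene"]

def appraise(state, desire):
    """Emotion faculty (assumed rule): valence from desire satisfaction."""
    return "joy" if desire in state else "frustration"

def plan(state, desire):
    """Planning faculty (stub): pick the next action toward the desire."""
    return "done" if desire in state else f"seek({desire})"

def agent_step(sensor_input, desire):
    state = recognize(sensor_input)
    emotion = appraise(state, desire)
    action = plan(state, desire)       # the actuator would execute this
    return emotion, action

if __name__ == "__main__":
    print(agent_step({"scene": {"grapes_visible"}}, "grapes_reached"))
    # ('frustration', 'seek(grapes_reached)')
```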
Citations: 11
Learning audio-visual associations using mutual information
Pub Date: 1999-09-21 DOI: 10.1109/ISIU.1999.824909
D. Roy, B. Schiele, A. Pentland
This paper addresses the problem of finding useful associations between audio and visual input signals. The proposed approach is based on the maximization of mutual information of audio-visual clusters. This approach results in segmentation of continuous speech signals, and finds visual categories which correspond to segmented spoken words. Such audio-visual associations may be used for modeling infant language acquisition and to dynamically personalize speech-based human-computer interfaces for various applications including catalog browsing and wearable computing. This paper describes an implemented system for learning shape names from camera and microphone input. We present results in an evaluation of the system for the domain of modeling language learning.
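The core quantity can be written down directly. The sketch below (a plug-in estimator over made-up cluster labels, not the authors' full segmentation system) estimates I(A;V) in bits from co-occurrence counts of audio and visual cluster assignments, using I(A;V) = sum over (a,v) of p(a,v) log2[ p(a,v) / (p(a) p(v)) ].

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(A;V) in bits from co-occurring
    (audio_cluster, visual_cluster) assignments."""
    n = len(pairs)
    joint = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_v = Counter(v for _, v in pairs)
    mi = 0.0
    for (a, v), c in joint.items():
        # p(a,v)/(p(a)p(v)) simplifies to c*n / (count_a * count_v).
        mi += (c / n) * math.log2(c * n / (p_a[a] * p_v[v]))
    return mi

if __name__ == "__main__":
    # Hypothetical co-occurrences: the audio cluster of a spoken word paired
    # with the visual shape cluster in view when it was spoken.
    pairs = ([("ball", "round")] * 8 + [("block", "square")] * 7
             + [("ball", "square")] * 1)
    print(round(mutual_information(pairs), 3))  # ~0.717 bits
```

Cluster pairs that maximize this score are the candidate word-to-shape associations.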
Citations: 33