Natural Language Processing and Computational Linguistics

Computational Linguistics | IF 3.7 | Zone 2, Computer Science | JCR Q2, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Volume 47, Number 4, pages 707-727 | Pub Date: 2021-10-18 | DOI: 10.1162/coli_a_00420
Jun'ichi Tsujii
{"title":"Natural Language Processing and Computational Linguistics","authors":"Jun'ichi Tsujii","doi":"10.1162/coli_a_00420","DOIUrl":null,"url":null,"abstract":"away other aspects of information, such as the speaker’s empathy, distinction of old/new information, emphasis, and so on. To climb up the hierarchy led to loss of information in lower levels of representation. In Tsujii (1986), instead of mapping at the abstract level, I proposed “transfer based on a bundle of features of all the levels”, in which the transfer would refer to all levels of representation in the source language to produce a corresponding representation in the target language (Figure 4). Because different levels of representation require different geometrical structures (i.e., different tree structures), the realization of this proposal had to wait for development of a clear mathematical formulation of feature-based 6 IS (Interface Structure) is dependent on a specific language. In particular, unlike the interlingual approach, Eurotra did not assume language-independent leximemes in ISs so that the transfer phase between the two ISs (source and target ISs) was indispensable. See footnote 5. 711 D ow naded rom httpdirect.m it.edu/coli/article-p7/1979478/coli_a_00420.pdf by gest on 04 M arch 2022 Computational Linguistics Volume 47, Number 4 Figure 4 Description-based transfer (Tsujii 1986). representation with reentrancy, which allowed multiple levels (i.e., multiple trees) to be represented with their mutual relationships (see the next section). Another idea we adopted to systematize the transfer phase was recursive transfer (Nagao and Tsujii 1986), which was inspired by the idea of compositional semantics in CL. According to the views of linguists at the time, a language is an infinite set of expressions which, in turn, is defined by a finite set of rules. By applying this finite number of rules, one can generate infinitely many grammatical sentences of the language. Compositional semantics claimed that the meaning of a phrase was determined by combining the meanings of its subphrases, using the rules that generated the phrase. Compositional translation applied the same idea to translation. That is, the translation of a phrase was determined by combining the translations of its subphrases. In this way, translations of infinitely many sentences of the source language could be generated. Using the compositional translation approach, the translation of a sentence would be undertaken by recursively tracing a tree structure of a source sentence. The translation of a phrase would then be formulated by combining the translations of its subphrases. That is, translation would be constructed in a bottom up manner, from smaller units of translation to larger units. Furthermore, because the mapping of a phrase from the source to the target would be determined by the lexical head of the phrase, the lexical entry for the head word specified how to map a phrase to the target. In the MU project, we called this lexicondriven, recursive transfer (Nagao and Tsujii 1986) (Figure 5). 712 D ow naded rom httpdirect.m it.edu/coli/article-p7/1979478/coli_a_00420.pdf by gest on 04 M arch 2022 Tsujii Natural Language Processing and Computational Linguistics Figure 5 Lexicon-driven recursive structure transfer (Nagao and Tsujii 1986). Figure 6 Disambiguation at lexical transfer. 
Compared with the first-generation MT systems, which replaced source expressions with target ones in an undisciplined and ad hoc order, the order of transfer in the MU project was clearly defined and systematically performed. Lessons. Research and development of the second-generation MT systems benefitted from research into CL, allowing more clearly defined architectures and design principles than first-generation MT systems. The MU project successfully delivered EnglishJapanese and Japanese-English MT systems within the space of four years. Without these CL-driven design principles, we could not have delivered these results in such a short period of time. However, the differences between the objectives of the two disciplines also became clear. Whereas CL theories tend to focus on specific aspects of language (such as morphology, syntax, semantics, discourse, etc.), MT systems must be able to handle all aspects of information conveyed by language. As discussed, climbing up a hierarchy that focuses on propositional content alone does not result in good translation. A more serious discrepancy between CL and NLP is the treatment of ambiguities of various kinds. Disambiguation is the single most significant challenge in most NLP tasks; it requires the context in which expressions to be disambiguated occur to be processed. In other words, it requires understanding of context. 713 D ow naded rom httpdirect.m it.edu/coli/article-p7/1979478/coli_a_00420.pdf by gest on 04 M arch 2022 Computational Linguistics Volume 47, Number 4 Typical examples of disambiguation are shown in Figure 6. The Japanese word asobu has a core meaning of “spend time without engaging in any specific useful tasks”, and would be translated into “to play”, “to have fun”, “to spend time”, “to hang around”, and so on, depending on the context (Tsujii 1986). Considering context for disambiguation contradicts with recursive transfer, since it requires larger units to be handled (i.e., the context in which a unit to be translated occurs). The nature of disambiguation made the process of recursive transfer clumsy. Disambiguation was also a major problem in the analysis phase, which I discuss in the next section. The major (although hidden) general limitation of CL or linguistics is that it tends to view language as an independent, closed system and avoids the problem of understanding, which requires reference to knowledge or non-linguistic context. However, many NLP tasks, including MT, require an understanding or interpretation of language expressions in terms of knowledge and context, which may involve other input modalities, such as visual stimuli, sound, and so forth. I discuss this in the section on the future of research. 4. Grammar Formalisms and Parsing Background and Motivation. At the time I was engaged in MT research, new developments took place in CL, namely, feature-based grammar formalisms (Kriege 1993). At its early stage, transformational grammar in theoretical linguistics by N. Chomsky assumed that sequential stages of application of tree transformation rules linked the two levels of structures, that is, deep and surface structures. A similar way of thinking was also shared by the MT community. They assumed that climbing up the hierarchy would involve sequential stages of rule application, which map from the representation at one level to another representation at the next adjacent level. 
Because each level of the hierarchy required its own geometrical structure, it was not considered possible to have a unified non-procedural representation, in which representations of all the levels co-exist. This view was changed by the emergence of feature-based formalisms that used directed acyclic graphs (DAGs) to allow reentrancy. Instead of mappings from one level to another, it described mutual relationships among different levels of representation in a declarative manner. This view was in line with our idea of description-based transfer, which used a bundle of features of different levels for transfer. Moreover, some grammar formalisms at the time emphasized the importance of lexical heads. That is, local structures of all the levels are constrained by the lexical head of a phrase, and these constraints are encoded in lexicon. This was also in line with our lexicon-driven transfer. A further significant development in CL took place at the same time. Namely, a number of sizable tree bank projects, most notably the Penn Treebank and the Lancaster/IBM Treebank, had reinvigorated corpus linguistics and started to have significant impacts on research into CL and NLP (Marcus et al. 1994). From the NLP point of 7 This is overgeneralization. Theoretical linguistics by N. Chomsky explicitly avoided problems related with interpretation and treated language as a closed system. Other linguistic traditions have had more relaxed, open attitudes. 8 Note that transformational grammar considered a set of rules applied to generate surface structures from the deep structure. On the hand, the “climbing-up the hierarchy” model of analysis considered a set of rules to reveal the abstract level of representation from the surface level of representation. The directions are opposite. Ambiguities did not cause problems in transformational grammar. 714 D ow naded rom httpdirect.m it.edu/coli/article-p7/1979478/coli_a_00420.pdf by gest on 04 M arch 2022 Tsujii Natural Language Processing and Computational Linguistics view, the emergence of large tree banks led to the development of powerful tools (i.e., probabilistic models) for disambiguation. We started research that would combine these two trends to systematize the analysis phase—that is, parsing based on feature-based grammar formalisms. Research Contributions. It is often claimed that ambiguities occur because of insufficient constraints. In the analysis phase of the “climbing up the hierarchy” model, lower levels of processing could not refer to constraints in higher levels of representation. This was considered the main cause of the combinatorial explosion of ambiguities at the early stages of climbing up the hierarchy. Syntactic analysis could not refer to semantic constraints, meaning that ambiguities in syntactic analysis would explode. On the other hand, because the feature-based formalisms could describe constraints at all levels in a single unified framework, it was possible to refer to constraints at all levels, to narrow down the set of possible interpretations. However, in practice, the actual grammar was still vastly underconstrained. This was partly because we do not have effective ways of expressing semantic and pragmatic constraints. 
Computational linguists were interested in formal declarative ways for relating syntactic and semantic le","PeriodicalId":55229,"journal":{"name":"Computational Linguistics","volume":"47 1","pages":"707-727"},"PeriodicalIF":3.7000,"publicationDate":"2021-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computational Linguistics","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1162/coli_a_00420","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 2

Abstract

[…] away other aspects of information, such as the speaker's empathy, the distinction between old and new information, emphasis, and so on. Climbing up the hierarchy thus led to a loss of the information carried at the lower levels of representation. In Tsujii (1986), instead of mapping at the abstract level, I proposed "transfer based on a bundle of features of all the levels", in which the transfer would refer to all levels of representation in the source language to produce a corresponding representation in the target language (Figure 4). Because different levels of representation require different geometrical structures (i.e., different tree structures), the realization of this proposal had to wait for the development of a clear mathematical formulation of feature-based representation with reentrancy, which allowed multiple levels (i.e., multiple trees) to be represented together with their mutual relationships (see the next section).

[Footnote 6: IS (Interface Structure) is dependent on a specific language. In particular, unlike the interlingual approach, Eurotra did not assume language-independent lexemes in ISs, so that the transfer phase between the two ISs (source and target) was indispensable. See footnote 5.]

[Figure 4: Description-based transfer (Tsujii 1986).]

Another idea we adopted to systematize the transfer phase was recursive transfer (Nagao and Tsujii 1986), which was inspired by the idea of compositional semantics in CL. According to the views of linguists at the time, a language is an infinite set of expressions which, in turn, is defined by a finite set of rules. By applying this finite number of rules, one can generate infinitely many grammatical sentences of the language. Compositional semantics claimed that the meaning of a phrase was determined by combining the meanings of its subphrases, using the rules that generated the phrase. Compositional translation applied the same idea to translation: the translation of a phrase was determined by combining the translations of its subphrases. In this way, translations of infinitely many sentences of the source language could be generated.

Using the compositional translation approach, the translation of a sentence would be undertaken by recursively tracing the tree structure of the source sentence. The translation of a phrase would then be formulated by combining the translations of its subphrases. That is, the translation would be constructed in a bottom-up manner, from smaller units of translation to larger units. Furthermore, because the mapping of a phrase from the source to the target would be determined by the lexical head of the phrase, the lexical entry for the head word specified how to map a phrase to the target. In the MU project, we called this lexicon-driven, recursive transfer (Nagao and Tsujii 1986) (Figure 5).

[Figure 5: Lexicon-driven recursive structure transfer (Nagao and Tsujii 1986).]

Compared with the first-generation MT systems, which replaced source expressions with target ones in an undisciplined and ad hoc order, the order of transfer in the MU project was clearly defined and systematically performed.
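The lexicon-driven, recursive transfer sketched above lends itself to a short illustration. The toy Phrase structure, the TRANSFER_LEXICON entries, and the Japanese example words below are invented for this sketch and are not the MU project's actual data structures or rules; the point is only the control flow: each phrase is translated by first translating its subphrases and then letting the lexical entry of the head word decide how to combine them in the target language.

```python
# Minimal sketch of lexicon-driven, recursive (compositional) transfer.
# Toy data structures and lexicon entries; not the MU project's implementation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Phrase:
    head: str                                    # lexical head of the phrase (source language)
    children: List["Phrase"] = field(default_factory=list)

# Hypothetical transfer lexicon: each head-word entry names a target head and a
# pattern saying where the translated subphrases go around it.
TRANSFER_LEXICON = {
    "yomu":   {"target": "reads",  "pattern": "{0} {head} {1}"},   # arg0 reads arg1
    "hon":    {"target": "a book", "pattern": "{head}"},
    "kanojo": {"target": "she",    "pattern": "{head}"},
}

def transfer(phrase: Phrase) -> str:
    """Translate bottom-up: translate the subphrases first, then combine them
    as specified by the lexical entry of the head (lexicon-driven transfer)."""
    entry = TRANSFER_LEXICON[phrase.head]
    translated_children = [transfer(child) for child in phrase.children]
    return entry["pattern"].format(*translated_children, head=entry["target"])

# Toy source structure for "kanojo ga hon o yomu" (particles omitted).
sentence = Phrase("yomu", [Phrase("kanojo"), Phrase("hon")])
print(transfer(sentence))                        # -> "she reads a book"
```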
Lessons. Research and development of the second-generation MT systems benefitted from research into CL, allowing more clearly defined architectures and design principles than in the first-generation MT systems. The MU project successfully delivered English-Japanese and Japanese-English MT systems within the space of four years. Without these CL-driven design principles, we could not have delivered these results in such a short period of time. However, the differences between the objectives of the two disciplines also became clear. Whereas CL theories tend to focus on specific aspects of language (such as morphology, syntax, semantics, and discourse), MT systems must be able to handle all aspects of the information conveyed by language. As discussed, climbing up a hierarchy that focuses on propositional content alone does not result in good translation.

A more serious discrepancy between CL and NLP is the treatment of ambiguities of various kinds. Disambiguation is the single most significant challenge in most NLP tasks; it requires processing the context in which the expressions to be disambiguated occur. In other words, it requires understanding of context. Typical examples of disambiguation are shown in Figure 6. The Japanese word asobu has a core meaning of "spend time without engaging in any specific useful tasks", and would be translated as "to play", "to have fun", "to spend time", "to hang around", and so on, depending on the context (Tsujii 1986). Considering context for disambiguation conflicts with recursive transfer, since it requires larger units to be handled (i.e., the context in which the unit to be translated occurs). The nature of disambiguation made the process of recursive transfer clumsy. Disambiguation was also a major problem in the analysis phase, which I discuss in the next section.

[Figure 6: Disambiguation at lexical transfer.]

The major (although hidden) general limitation of CL, or of linguistics, is that it tends to view language as an independent, closed system and avoids the problem of understanding, which requires reference to knowledge or non-linguistic context. However, many NLP tasks, including MT, require an understanding or interpretation of language expressions in terms of knowledge and context, which may involve other input modalities, such as visual stimuli, sound, and so forth. I discuss this in the section on the future of research.

4. Grammar Formalisms and Parsing

Background and Motivation. At the time I was engaged in MT research, new developments took place in CL, namely, feature-based grammar formalisms (Kriege 1993). In its early stage, transformational grammar in N. Chomsky's theoretical linguistics assumed that sequential stages of application of tree-transformation rules linked the two levels of structure, that is, deep and surface structure. A similar way of thinking was shared by the MT community: they assumed that climbing up the hierarchy would involve sequential stages of rule application, each mapping the representation at one level to the representation at the next adjacent level. Because each level of the hierarchy required its own geometrical structure, it was not considered possible to have a unified, non-procedural representation in which the representations of all the levels co-exist. This view was changed by the emergence of feature-based formalisms that used directed acyclic graphs (DAGs) to allow reentrancy. Instead of mappings from one level to another, these formalisms described the mutual relationships among different levels of representation in a declarative manner. This view was in line with our idea of description-based transfer, which used a bundle of features of different levels for transfer. Moreover, some grammar formalisms at the time emphasized the importance of lexical heads: the local structures of all the levels are constrained by the lexical head of a phrase, and these constraints are encoded in the lexicon. This was also in line with our lexicon-driven transfer.
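A rough sketch of what feature-based representation with reentrancy buys, using plain Python dictionaries: descriptions at several levels live in one structure, and a substructure shared between levels is the same object rather than a copy, so a constraint added at one level is immediately visible at the other. The simplified unify routine and the miniature sign below are illustrative assumptions, not the machinery of any particular formalism.

```python
# Minimal sketch of feature structures with reentrancy, using nested dicts.
# Shared substructure is modelled by putting the *same* dict object under two
# paths; destructive unification then updates both paths at once.

def unify(a, b):
    """Destructively unify feature structure b into a (simplified):
    atomic values must be equal; nested structures are unified recursively."""
    for feature, value in b.items():
        if feature not in a:
            a[feature] = value
        elif isinstance(a[feature], dict) and isinstance(value, dict):
            unify(a[feature], value)
        elif a[feature] != value:
            raise ValueError(f"unification failure at {feature!r}: {a[feature]} vs {value}")
    return a

# One node shared between the syntactic and the semantic level: the reentrancy.
shared_index = {}

sign = {
    "syntax":    {"cat": "S", "subject": {"agreement": shared_index}},
    "semantics": {"predicate": "run", "agent": shared_index},
}

# Adding constraints on the syntactic side ...
unify(sign["syntax"]["subject"]["agreement"], {"person": "3", "number": "sg"})

# ... is immediately visible on the semantic side, because it is the same node.
print(sign["semantics"]["agent"])                                            # {'person': '3', 'number': 'sg'}
print(sign["syntax"]["subject"]["agreement"] is sign["semantics"]["agent"])  # True
```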
A further significant development in CL took place at the same time. A number of sizable treebank projects, most notably the Penn Treebank and the Lancaster/IBM Treebank, had reinvigorated corpus linguistics and started to have significant impacts on research into CL and NLP (Marcus et al. 1994). From the NLP point of view, the emergence of large treebanks led to the development of powerful tools (i.e., probabilistic models) for disambiguation. We started research that would combine these two trends to systematize the analysis phase, that is, parsing based on feature-based grammar formalisms.

[Footnote 7: This is an overgeneralization. Theoretical linguistics in the tradition of N. Chomsky explicitly avoided problems related to interpretation and treated language as a closed system. Other linguistic traditions have had more relaxed, open attitudes.]

[Footnote 8: Note that transformational grammar considered a set of rules applied to generate surface structures from the deep structure. On the other hand, the "climbing up the hierarchy" model of analysis considered a set of rules to reveal the abstract levels of representation from the surface level of representation. The directions are opposite. Ambiguities did not cause problems in transformational grammar.]

Research Contributions. It is often claimed that ambiguities occur because of insufficient constraints. In the analysis phase of the "climbing up the hierarchy" model, lower levels of processing could not refer to constraints at higher levels of representation. This was considered the main cause of the combinatorial explosion of ambiguities at the early stages of climbing up the hierarchy: syntactic analysis could not refer to semantic constraints, so ambiguities in syntactic analysis would explode. On the other hand, because the feature-based formalisms could describe constraints at all levels in a single unified framework, it was possible to refer to constraints at all levels to narrow down the set of possible interpretations. However, in practice, the actual grammar was still vastly underconstrained, partly because we do not have effective ways of expressing semantic and pragmatic constraints. Computational linguists were interested in formal declarative ways for relating syntactic and semantic le…
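As a closing toy illustration of the disambiguation problem raised above, the sketch below chooses among the candidate translations of asobu by overlapping the sentence context with hand-invented cue words. The cue sets, scoring, and function name are assumptions made for this sketch only; as the paragraphs above note, a practical system would replace such hand-written preferences with probabilistic models estimated from treebanks and other corpora.

```python
# Toy context-sensitive lexical choice for the Japanese verb "asobu".
# Candidate translations follow the example in the abstract; the context cues
# and the scoring scheme are invented for illustration, not real resources.
CANDIDATES = {
    "to play":        {"child", "children", "game", "toy"},
    "to have fun":    {"party", "friends", "holiday"},
    "to spend time":  {"weekend", "afternoon"},
    "to hang around": {"street", "downtown", "idly"},
}

def translate_asobu(context_words):
    """Pick the candidate whose cue words overlap most with the sentence context."""
    context = {w.lower() for w in context_words}
    scores = {cand: len(cues & context) for cand, cues in CANDIDATES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "to spend time"    # generic fallback reading

print(translate_asobu("The child wants to asobu with a new toy".split()))
# -> "to play"
```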
Source journal: Computational Linguistics (Engineering & Technology, Computer Science: Interdisciplinary Applications)
CiteScore: 15.80
Self-citation rate: 0.00%
Articles published: 45
Review time: >12 weeks
Journal introduction: Computational Linguistics, the longest-running publication dedicated solely to the computational and mathematical aspects of language and the design of natural language processing systems, provides university and industry linguists, computational linguists, AI and machine learning researchers, cognitive scientists, speech specialists, and philosophers with the latest insights into the computational aspects of language research.
Latest articles in this journal
Generation and Polynomial Parsing of Graph Languages with Non-Structural Reentrancies
Languages through the Looking Glass of BPE Compression
Capturing Fine-Grained Regional Differences in Language Use through Voting Precinct Embeddings
Machine Learning for Ancient Languages: A Survey
Statistical Methods for Annotation Analysis by Silviu Paun, Ron Artstein, and Massimo Poesio