
Latest publications from Russian Journal of Linguistics

A semiotic portrait of a big Chinese city
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-09-30 · DOI: 10.22363/2687-0088-31228
O. Leontovich, N. Kotelnikova
Urban communication studies is a growing field of research aiming to reveal the regularities of human interaction in an urban context. The goal of the present study is to examine the semiotics of a big Chinese city as a complex communicative system and its effect on the social development of the urban community. The material includes over 700 units (toponyms, street signs, advertisements, memorials, local foods and souvenirs, mass media, etc.) mostly collected in Tianjin, China’s fourth biggest city, with a population of almost 14 million people. The research methodology is based on critical discourse analysis, ethnographic and semiotic methods, and narrative analysis. The study reveals the structure of communication in a big Chinese city and the integration of language into the city landscape. It indicates that urban historical memories are manifested in the form of memorials, symbols, and historic and contemporary narratives. The physical context is associated with the names of streets and other topological objects. Verbal and visual semiotic signs are used to ensure people’s psychological and physical safety. Social advertising predominantly deals with the propaganda of Chinese governmental policy, traditional values and ‘civilized behaviour’. Chinese urban subcultures, such as the ‘ant tribe’, ‘pendulums’ and ‘shamate’, reflect new social realities. Food and foodways are defined by cultural values and different aspects of social identity. The image of a big Chinese city is also affected by globalization tendencies and the COVID-19 pandemic. The research framework presented in the study provides an opportunity to show a wide panorama of modern urban life. It can be extrapolated to the investigation of other big cities and their linguistic landscapes.
Citations: 0
Review of A.Ya. Shajkevich, V.M. Andryushchenko, N.A. Rebeckaya. 2021. Distributive-statistical analysis of the language of Russian prose of the 1850–1870s, vol. 3. Publishing House YaSK, Moscow. ISBN 978-5-907290-61-7
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-30307
V. Bayrasheva
Citations: 0
Natural language processing and discourse complexity studies
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-30171
M. Solnyshkina, D. McNamara, R. Zamaletdinov
The study presents an overview of discursive complexology, an integral paradigm of linguistics, cognitive studies and computational linguistics aimed at defining discourse complexity. The article comprises three main parts, which successively outline views on the category of linguistic complexity, the history of discursive complexology, and modern methods of text complexity assessment. Distinguishing the concepts of linguistic complexity, text complexity and discourse complexity, we recognize the absolute nature of text complexity assessment and the relative nature of discourse complexity, determined by the linguistic and cognitive abilities of a recipient. Founded in the 19th century, text complexity theory is still focused on defining and validating complexity predictors and criteria for text perception difficulty. We briefly characterize the five previous stages of discursive complexology: formative, classical, the period of closed tests, constructive-cognitive, and the period of natural language processing. We also present the theoretical foundations of Coh-Metrix, an automatic analyzer based on a five-level cognitive model of perception. Computing not only lexical and syntactic parameters but also text-level parameters, situational models and rhetorical structures, Coh-Metrix provides a high level of accuracy of discourse complexity assessment. We also show the benefits of natural language processing models and a wide range of application areas of text profilers and digital platforms such as LEXILE and ReaderBench. We view the parametrization and development of a complexity matrix for texts of various genres as the nearest prospect for the development of discursive complexology, which may enable a higher accuracy of inter- and intra-linguistic contrastive studies, as well as automated selection and modification of texts for various pragmatic purposes.
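The parametric approach mentioned in the abstract can be illustrated with a minimal sketch: a profiler that computes a few surface indices of a text. The indices below are illustrative stand-ins only; Coh-Metrix computes far richer measures, up to situational models and rhetorical structure.

```python
import re

def simple_complexity_indices(text: str) -> dict:
    """Toy surface indices in the spirit of parametric text profilers."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "mean_sentence_length": len(words) / max(len(sentences), 1),
        "mean_word_length": sum(map(len, words)) / max(len(words), 1),
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

idx = simple_complexity_indices("The cat sat. The cat sat on the mat.")
print(idx)  # 9 words over 2 sentences -> mean sentence length 4.5
```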
Citations: 3
Computational linguistics and discourse complexology: Paradigms and research methods
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-31326
V. Solovyev, M. Solnyshkina, D. McNamara
The dramatic expansion of modern linguistic research and the enhanced accuracy of linguistic analysis have become a reality due to the ability of artificial neural networks not only to learn and adapt, but also to carry out automated linguistic analysis and to select, modify and compare texts of various types and genres. The purpose of this article, and of the journal issue as a whole, is to present modern areas of research in computational linguistics and linguistic complexology, as well as to define a solid rationale for the new interdisciplinary field of discourse complexology. The review of trends in computational linguistics focuses on the following aspects of research: applied problems and methods, computational linguistic resources, the contribution of theoretical linguistics to computational linguistics, and the use of deep learning neural networks. The special issue also addresses the problem of objective and relative text complexity and its assessment. We focus on the two main approaches to linguistic complexity assessment: the “parametric approach” and machine learning. The findings of the studies published in this special issue indicate a major contribution of computational linguistics to discourse complexology, including new algorithms developed to solve discourse complexology problems. The issue outlines the research areas of linguistic complexology and provides a framework to guide its further development, including the design of a complexity matrix for texts of various types and genres, refining the list of complexity predictors, validating new complexity criteria, and expanding databases for natural language.
Citations: 9
Word frequency and text complexity: an eye-tracking study of young Russian readers
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-30084
A. Laposhina, M. Lebedeva, Alexandra Berlin Khenis
Although word frequency is often associated with the cognitive load on the reader and is widely used for automated text complexity assessment, to date no eye-tracking data have been obtained on the effectiveness of this parameter for text complexity prediction for Russian primary school readers. Moreover, the optimal way to aggregate the frequencies of individual words into an assessment of entire text complexity has not yet been precisely determined. This article aims to fill these gaps. The study was conducted on a sample of 53 children of primary school age. As stimulus material, we used six texts that differ in their scores on the classical Flesch readability formula and in word-frequency characteristics. As sources of frequency data, we used the common frequency dictionary based on the material of the Russian National Corpus, and DetCorpus, a corpus of literature addressed to children. The speed of reading the text aloud, in words per minute averaged over the grades, was employed as a measure of text complexity. The best prediction of relative reading time was obtained using the lemma frequency data from DetCorpus. At the text level, the highest correlation with reading speed was shown by text coverage with a list of the 5,000 most frequent words, with both sources of the lists, the Russian National Corpus and DetCorpus, showing almost the same correlation values. For a more detailed analysis, we also calculated the correlation of the frequency parameters of specific word forms and lemmas with three parameters of oculomotor activity: dwell time, fixation count, and average fixation duration. At the word-by-word level, lemma frequency in DetCorpus demonstrated the highest correlation with relative reading time. The results confirm the feasibility of using frequency data in the text complexity assessment task for primary school children and demonstrate the optimal ways to calculate frequency data.
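The text-coverage measure that correlated best with reading speed can be sketched as the share of running tokens found in a frequency list. The three-word list below is a placeholder for a real top-5,000 lemma list, not actual DetCorpus data.

```python
def coverage(tokens: list[str], frequent: set[str]) -> float:
    """Share of running tokens found in a frequency list."""
    if not tokens:
        return 0.0
    return sum(t in frequent for t in tokens) / len(tokens)

top_lemmas = {"мама", "мыла", "раму"}           # placeholder frequency list
tokens = ["мама", "мыла", "раму", "тщательно"]  # lemmatized running text
print(coverage(tokens, top_lemmas))  # 3 of 4 tokens covered -> 0.75
```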
Citations: 1
Collection and evaluation of lexical complexity data for Russian language using crowdsourcing
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-30118
A. Abramov, Vladimir Ivanov
Estimating word complexity with binary or continuous scores is a challenging task that has been studied for several domains and natural languages. Commonly this task is referred to as Complex Word Identification (CWI) or Lexical Complexity Prediction (LCP). Correct evaluation of word complexity can be an important step in many lexical simplification pipelines. Earlier works usually presented methodologies of lexical complexity estimation with several restrictions: hand-crafted features correlated with word complexity, feature engineering describing target words with features such as the number of hypernyms, consonant count and Named Entity tag, and evaluations with carefully selected target audiences. More recent works have investigated transformer-based models that can also extract features from the surrounding context. However, the majority of papers have been devoted to pipelines for the English language, and few have transferred them to other languages such as German, French, and Spanish. In this paper we present a dataset of lexical complexity in context based on the Russian Synodal Bible, collected using a crowdsourcing platform. We describe a methodology for collecting the data using a 5-point Likert scale for annotation, present descriptive statistics, and compare the results with analogous work for the English language. We evaluate a linear regression model as a baseline for predicting word complexity from handcrafted features and from fastText and ELMo embeddings of target words. The result is a corpus of 931 distinct words used in 3,364 different contexts.
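The linear-regression baseline can be sketched with ordinary least squares over hand-crafted word features. The two features and the ratings below are invented placeholders (the study also evaluated fastText and ELMo embeddings as inputs).

```python
import numpy as np

# Toy data: [word length, corpus log-frequency] -> mean 1-5 Likert rating.
X = np.array([[4, 5.1], [12, 1.2], [7, 3.0], [15, 0.4], [5, 4.4]])
y = np.array([1.2, 4.5, 2.8, 4.9, 1.6])

X1 = np.hstack([X, np.ones((len(X), 1))])   # add an intercept column
w, *_ = np.linalg.lstsq(X1, y, rcond=None)  # ordinary least squares fit
pred = X1 @ w                               # in-sample predictions
print(np.round(pred, 2))
```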
Citations: 0
A cognitive linguistic approach to analysis and correction of orthographic errors
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-30122
Robert Joshua Reynolds, L. Janda, T. Nesset
In this paper, we apply usage-based linguistic analysis to systematize the inventory of orthographic errors observed in the writing of non-native users of Russian. The data comes from a longitudinal corpus (560K tokens) of non-native academic writing. Traditional spellcheckers mark errors and suggest corrections, but do not attempt to model why errors are made. Our approach makes it possible to recognize not only the errors themselves, but also the conceptual causes of these errors, which lie in misunderstandings of Russian phonotactics and morphophonology and the way they are represented by orthographic conventions. With this linguistically-based system in place, we can propose targeted grammar explanations that improve users’ command of Russian morphophonology rather than merely correcting errors. Based on errors attested in the non-native academic writing corpus, we introduce a taxonomy of errors, organized by pedagogical domains. Then, on the basis of this taxonomy, we create a set of mal-rules to expand an existing finite-state analyzer of Russian. The resulting morphological analyzer tags wordforms that fit our taxonomy with specific error tags. For each error tag, we also develop an accompanying grammar explanation to help users understand why and how to correct the diagnosed errors. Using our augmented analyzer, we build a webapp to allow users to type or paste a text and receive detailed feedback and correction on common Russian morphophonological and orthographic errors.
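A minimal sketch of the mal-rule idea: pair an attested non-native misspelling with its target form and a pedagogically meaningful error tag. The entries and tag names below are invented examples, not the authors' actual rule set or taxonomy labels, and a real finite-state analyzer would generalize over morphology rather than list wordforms.

```python
# Invented mal-rule table: misspelled wordform -> (target form, error tag).
MAL_RULES = {
    "хорошый": ("хороший", "Err/Orth-ZhiShi"),   # violates the *жы/шы rule
    "болшой": ("большой", "Err/Orth-SoftSign"),  # missing soft sign
}

def analyze(wordform: str):
    """Return (correction, error_tag) if a mal-rule fires, else None."""
    return MAL_RULES.get(wordform)

print(analyze("хорошый"))
```

Each error tag could then index into a store of grammar explanations, mirroring the paper's pairing of diagnosed errors with targeted feedback.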
Citations: 0
Word-formation complexity: a learner corpus-based study
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-06-29 · DOI: 10.22363/2687-0088-31187
O. Lyashevskaya, Julia Vyacheslavovna Pyzhak, Olga Vinogradova
This article explores the word-formation dimension of learner text complexity, which indicates how skilful non-native speakers are in using more and less complex, and varied, derivational constructions. In order to analyse the association between complexity and writing accuracy in word formation, as well as the interactive effects of task type, text register and native language background, we examine the materials of the REALEC corpus of English essays written by university students with Russian L1. We present an approach to measuring derivational complexity based on the classification of suffixes offered in Bauer and Nation (1993) and then compare the complexity results with the number of word-formation errors annotated in the texts. Starting from the hypothesis that the number of errors will decrease as complexity increases, we apply statistical analysis to examine the association between complexity and accuracy. We found, first, that the use of more advanced word-formation suffixes affects the number of errors in texts. Second, different levels of suffixes in the hierarchy affect derivation accuracy in different ways. In particular, the use of irregular derivational models is positively associated with the number of errors. Third, the type of examination task and the expected format and register of writing should be taken into consideration. The hypothesis holds true for regular but infrequent advanced suffixal models used in more formal descriptive essays associated with an academic register. However, for less formal texts with lower academic register requirements, the hypothesis needs to be amended.
Citations: 0
The negotiation of authorial persona in dissertations' literature review and discussion sections
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-03-30 · DOI: 10.22363/2687-0088-27620
Emna Fendri, Mounir Triki
Writing at a postgraduate level is not only meant to obtain a degree in a specific field but also, and more importantly, to ensure that one's research is published nationally as well as internationally. In other words, conducting research is first and foremost about making one's distinctive voice heard. Using Martin and White's (2005) appraisal framework, the present study examines the way Tunisian MA and PhD EFL researchers in applied linguistics establish a dialogue with the reader as a persuasive tool in their texts. The comparison is meant to unveil cross-generic differences in authorial voice manifestation that distinguish postgraduate writers at different degrees. A corpus of 20 Literature Review and 20 Discussion sections taken from 10 MA and 10 PhD dissertations written in English by Tunisian EFL writers is qualitatively and quantitatively explored. Linguistic markers denoting the writers' stance are identified in the corpus and are qualitatively studied using the engagement subsystem to qualify the utterance as dialogically contractive or expansive. A quantitative analysis then compares how dialogicality is manifested across the degrees and sections using SPSS. The results show that the negotiation of voice seems to be more problematic for MA researchers in both sections in comparison to PhD writers. Dialogic contraction in the MA subcorpus conveys a limited authorial positioning in the Literature Review section and a failure to stress personal contribution in the Discussion section. PhD researchers' frequent reliance on expansion in both sections displays their academic maturity. The critical evaluation of previous works in the Literature Review and the claim for authorial ownership in the Discussion section distinguish them from MA writers. The comparison not only stresses the strengths that distinguish PhD writers but also points out problematic instances in establishing a dialogue with the audience in postgraduate writings. The study findings can be used to consider EFL researchers' production in pedagogical contexts in terms of identity manifestation and stance-taking strategies across the different sections of the dissertation.
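The engagement tagging the abstract describes could be approximated, very roughly, as surface-marker matching. This is a toy sketch under loud assumptions: the marker lists are illustrative stand-ins, not Martin and White's (2005) actual engagement taxonomy, and real appraisal annotation is done qualitatively in context rather than by keyword lookup.

```python
# Toy sketch of engagement tagging: label a sentence dialogically
# "expansive" or "contractive" from surface markers. The marker sets
# are invented examples, not Martin & White's (2005) full system.

EXPANSIVE = {"may", "might", "perhaps", "possibly", "suggests", "argues"}
CONTRACTIVE = {"clearly", "must", "undoubtedly", "never", "demonstrates"}

def engagement_label(sentence):
    """Classify a sentence by which marker set it matches more often."""
    words = {w.strip(".,;:").lower() for w in sentence.split()}
    exp = len(words & EXPANSIVE)
    con = len(words & CONTRACTIVE)
    if exp > con:
        return "expansive"
    if con > exp:
        return "contractive"
    return "neutral"

print(engagement_label("The data suggests this may hold."))       # expansive
print(engagement_label("This clearly demonstrates the effect."))  # contractive
```

Counts produced this way per section and per degree (MA vs PhD) are the kind of figures that could then be compared statistically, as the study does with SPSS.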
Citations: 0
Structural and semantic congruence of Bulgarian, Russian and English set expressions: Contrastive-typological research
IF 0.9 · LANGUAGE & LINGUISTICS · Pub Date: 2022-03-30 · DOI: 10.22363/2687-0088-26443
N. Lavrova, Alexandr O. Kozmin
The main aim of the research is to analyze the degree of isomorphism and allomorphy (congruence) of set expressions in three languages - Bulgarian, Russian and English - and to highlight the main factors that have a bearing on the typological affinity of set expressions in these languages. The procedure of the research was two-fold. At the first stage, 4000 idioms were selected from Russian, Bulgarian and English idiomatic dictionaries through the method of random sampling (1334 idioms were selected from each language). For the sake of convenience and comparison, the selected idioms were divided into 5 thematic groups. At the second stage, 850 idioms were further selected for each group through stratified and quota sampling with the aim of subsequent quantification of recurrent keywords in each group. In order to quantify the number of the most frequent keywords in each group and to measure the prevalence of assonance and alliteration, the SPSS software was utilized. The results of the research revealed that the main factors that determine isomorphism and allomorphy among idioms from Bulgarian, Russian and English are (1) typological affinity between Bulgarian and English, (2) genetic kinship, (3) borrowings from English into Russian and Bulgarian and (4) from Russian into Bulgarian, (5) shared idiomatic stock and (6) such extralinguistic factors as the universal makeup of objects and entities, for instance, the same number of functional parts. The research results are relevant for comparative phraseology, areal and contrastive typology, and contactology.
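The two-stage sampling and keyword quantification described above can be sketched as follows. The idiom data and group handling are invented for illustration; the study itself worked with dictionary-drawn samples of thousands of idioms and used SPSS for the counts.

```python
# Hypothetical sketch of the two-stage design: (1) simple random sampling
# of idioms, (2) quota sampling per thematic group, then counting
# recurrent keywords. All data below are invented examples.

import random
from collections import Counter

random.seed(42)  # fixed seed so the sketch is reproducible

def sample_idioms(idioms, n):
    """Stage 1: simple random sample of n idioms."""
    return random.sample(idioms, min(n, len(idioms)))

def quota_sample(grouped, quota):
    """Stage 2: take up to `quota` idioms from each thematic group."""
    return {group: items[:quota] for group, items in grouped.items()}

def keyword_frequencies(idioms):
    """Count recurrent content words across a list of idiom strings."""
    counts = Counter()
    for idiom in idioms:
        counts.update(w for w in idiom.lower().split() if len(w) > 3)
    return counts

idioms = ["a heart of gold", "heart and soul", "a change of heart",
          "cold hands warm heart", "head over heels"]
sampled = sample_idioms(idioms, 4)
freqs = keyword_frequencies(idioms)
print(freqs.most_common(1))  # [('heart', 4)]
```

Per-group frequency tables built this way are the input one would then test for congruence across the three languages.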
Citations: 0