Support verb constructions figure among the most frequently investigated topics in the literature on collocation. So far, most studies of this kind have focused on bipartite structures, consisting of a verbal collocate and a nominal base. Accordingly, the analysis of how support verbs are distributed has concentrated almost exclusively on the lexical control exerted by the base. In this article, we draw attention to the influence exerted by the participation of verb and noun in more complex patterns of lexical co-occurrence. We contend that the distribution of the support verb collocate is contingent not only on the base noun but also on other elements of the lexical context. This highlights the need to enrich the theoretical framework of collocation analysis with the additional descriptive category of ‘second-order collocate’. The proposal is illustrated with two case studies using a large-scale web corpus of English.
“Beyond base and collocate”. P. Cantos & Moisés Almela-Sánchez. International Journal of Corpus Linguistics, 2021-05-15. doi:10.1075/IJCL.18072.CAN
This paper examines the use of the three non-periphrastic subjunctives in Spanish in embedded clauses under obligatory subjunctive predicates in the past tense in three Spanish varieties: Argentinean, Mexican and Peninsular Spanish. By means of random forest and logistic regression analyses, I demonstrate that a grammar where the two “past” subjunctives make up one group, such that the variation can be modeled on a binary opposition between (morphologically) past vs. (morphologically) present, achieves better prediction accuracy and goodness-of-fit parameters than a grammar with a three-way split. The results suggest that, at least in complement clauses of obligatory subjunctive predicates, there appear to be no semantic differences between the two past subjunctives but there are still relatively large differences in how the three subjunctive forms are used across the three Spanish varieties studied.
“Two subjunctives or three?”. Gustavo Guajardo. International Journal of Corpus Linguistics, 2021-05-10. doi:10.1075/IJCL.19130.GUA
This article provides a corpus-based investigation into shell nouns. Shell nouns perform a variety of referential functions and express speaker stance. The investigation was motivated by the fact that past research in this area has been primarily based on written texts. Very little is known about the use of shell nouns in speech. The study used the ICE-GB corpus of contemporary British English and investigated cataphoric shell nouns complemented by appositive that-clauses across fine-grained spoken and written registers. It has revealed that the deployment of shell nouns is governed by the principle of register formality definable in terms of contextual configurations of the Field-Tenor-Mode complex rather than the mode of production. Additionally, the study has uncovered the frequent use of a small core set of shell nouns common across speech and writing. Hence it argues that shell nouns are part and parcel of spoken and written discourse and that they pertain more to grammar than to lexis.
“Shell nouns as register-specific discourse devices”. A. Fang & Min Dong. International Journal of Corpus Linguistics, 2021-05-06. doi:10.1075/IJCL.19059.FAN
This paper discusses the creation and use of the Coronavirus Corpus, which is currently (March 2021) 900 million words in size, and which will probably be about one billion words in size by May–June 2021. The Coronavirus Corpus is a subset of the NOW Corpus (News on the Web), which is currently about 12.1 billion words in size and which grows by about two billion words each year. These two corpora are updated every night, with about 6–10 million words for NOW and 2–3 million words for the Coronavirus Corpus. The Coronavirus Corpus allows users to see the frequency of words and phrases over time (even by individual day), and users can find all words that are more frequent in one time period than another. Users can also see the collocates for words and phrases, and compare the collocates to see what is being said about particular topics over time.
“The Coronavirus Corpus”. Mark Davies. International Journal of Corpus Linguistics, 2021-05-03. doi:10.1075/IJCL.21044.DAV
Corpus data provide evidence of the patterning of language, and one way word usage can be analysed is through the study of concordance lines. While popular concordancers provide different sorting methods, they are typically only able to display lines in the order in which they occur in the corpus, randomly, or alphabetically by words in slots to the left or right of the word of interest. Less sophisticated users may find recognising patterns from these orderings quite challenging. This paper considers possible needs of language learners in terms of concordance ranking and introduces two methods which have been adopted and developed for The Prime Machine. The first method uses repeated patterns, measuring the number of matches made with other lines in the set. The second method incorporates collocation scores, providing examples with strong collocations from the entire corpus at the top of sampled concordance lines.
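The first ranking idea can be illustrated with a toy sketch. This is not The Prime Machine's actual implementation; the tokenized input, window size and scoring scheme are simplifying assumptions. Each concordance line is scored by how often its context words recur in the other lines of the sample, so lines that share repeated patterns float to the top:

```python
from collections import Counter

def rank_by_repeated_patterns(lines, node, window=2):
    """Rank tokenized concordance lines so that lines whose context words
    recur in many other lines of the set are displayed first.
    Assumes `node` occurs in every line."""
    contexts = []
    for tokens in lines:
        i = tokens.index(node)
        # context = window words to the left and right of the node
        contexts.append(set(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]))
    # how many lines each context word appears in
    doc_freq = Counter(w for ctx in contexts for w in ctx)
    # a line's score: total recurrences of its context words in OTHER lines
    scores = [sum(doc_freq[w] - 1 for w in ctx) for ctx in contexts]
    order = sorted(range(len(lines)), key=lambda k: scores[k], reverse=True)
    return [lines[k] for k in order]
```

The second method described in the abstract would replace the raw recurrence count with corpus-wide collocation scores for the context words; the skeleton above stays the same.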
“Concordance line sorting in The Prime Machine”. Stephen Jeaco. International Journal of Corpus Linguistics, 2021-04-07. doi:10.1075/IJCL.18056.JEA
This paper formulates and evaluates a series of multi-unit measures of directional association, building on the pairwise ΔP measure, that are able to quantify association in sequences of varying length and type of representation. Multi-unit measures face an additional segmentation problem: once the implicit length constraint of pairwise measures is abandoned, association measures must also identify the borders of meaningful sequences. This paper takes a vector-based approach to the segmentation problem by using 18 unique measures to describe different aspects of multi-unit association. An examination of these measures across eight languages shows that they are stable across languages and that each provides a unique rank of associated sequences. Taken together, these measures expand corpus-based approaches to association by generalizing across varying lengths and types of representation.
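The pairwise measure the paper builds on is straightforward to compute: ΔP(w2 | w1) = P(w2 | w1) − P(w2 | ¬w1), i.e. the difference the presence of w1 makes to the probability of w2. A minimal sketch over adjacent-token bigrams (the paper's multi-unit generalization is not reproduced here):

```python
def delta_p(corpus_tokens, w1, w2):
    """Directional association Delta-P(w2 | w1) over adjacent bigrams:
    P(w2 follows w1) minus P(w2 follows any other token).
    Assumes w1 and its complement both occur as a left element."""
    a = b = c = d = 0
    for left, right in zip(corpus_tokens, corpus_tokens[1:]):
        if left == w1 and right == w2:
            a += 1          # w1 followed by w2
        elif left == w1:
            b += 1          # w1 followed by something else
        elif right == w2:
            c += 1          # other token followed by w2
        else:
            d += 1          # neither
    return a / (a + b) - c / (c + d)
```

Because the measure is directional, ΔP(w2 | w1) and ΔP(w1 | w2) generally differ, which is what distinguishes it from symmetric measures such as MI or log-likelihood.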
“Multi-Unit Directional Measures of Association: Moving Beyond Pairs of Words”. J. Dunn. International Journal of Corpus Linguistics, 2021-04-03. doi:10.1075/ijcl.16098.dun
Covid-19, the greatest global health crisis for a century, brought a new immediacy and urgency to international bio-medical research. The pandemic generated intense competition to produce a vaccine and contain the virus, creating what the World Health Organization referred to as an ‘infodemic’ of published output. In this frantic atmosphere, researchers were keen to get their research noticed. In this paper, we explore whether this enthusiasm influenced the rhetorical presentation of research and encouraged scientists to “sell” their studies. Examining a corpus of the most highly cited SCI articles on the virus published in the first seven months of 2020, we explore authors’ use of hyperbolic and promotional language to boost aspects of their research. Our results show a significant increase in hype to stress certainty, contribution, novelty and potential, especially regarding research methods, outcomes and primacy. Our study sheds light on scientific persuasion at a time of intense social anxiety.
“The Covid infodemic”. Ken Hyland & F. Jiang. International Journal of Corpus Linguistics, 2021-02-25. doi:10.1075/IJCL.20160.HYL
Abstract This paper discusses the creation and use of the TV Corpus (subtitles from 75,000 episodes, 325 million words, 6 English-speaking countries, 1950s–2010s) and the Movies Corpus (subtitles from 25,000 movies, 200 million words, 6 English-speaking countries, 1930s–2010s), which are available at English-Corpora.org. The corpora compare well to the BNC-Conversation data in terms of informality, lexis, phraseology, and syntax. But at 525 million words in total size, they are more than 30 times as large as BNC-Conversation (both BNC1994 and BNC2014 combined), which means that they can be used to look at a wide range of linguistic phenomena. The TV and Movies corpora also allow useful comparisons of very informal language across time (containing texts from the 1930s and later for the movies, and from the 1950s onwards for TV shows) and between dialects of English (such as British and American English).
“The TV and Movies corpora”. Mark Davies. International Journal of Corpus Linguistics, 2021-02-09. doi:10.1075/IJCL.00035.DAV
Abstract This study explores marked affixation as a possible cue for characterization in scripted television dialogue. The data used here is the newly compiled TV Corpus, which encompasses over 265 million words in its North American English context. An initial corpus-based analysis quantifies the innovative use of affixes in word-formation processes across the corpus to allow for comparison with a following character analysis, which investigates how derivational word-formation supports characterization patterns within a specific series, Buffy the Vampire Slayer. For this, a list of productive prefixes (e.g. de-, un-) and suffixes (e.g. -y, -ish) is used to elicit relevant contexts. The study thus combines two approaches to word-formation processes in scripted contexts. On a large scale, it shows how derivational neologisms are spread across TV dialogue and on a much smaller scale, it highlights particular instances where these neologisms are used to aid character construction.
“Innovation on screen”. Susan A. Reichelt. International Journal of Corpus Linguistics, 2020-12-08. doi:10.1075/IJCL.00038.REI
Yi-Feng Huang, Akira Murakami, T. Alexopoulou, A. Korhonen
Abstract As large-scale learner corpora become increasingly available, it is vital that natural language processing (NLP) technology is developed to provide rich linguistic annotations necessary for second language (L2) research. We present a system for automatically analyzing subcategorization frames (SCFs) for learner English. SCFs link lexis with morphosyntax, shedding light on the interplay between lexical and structural information in learner language. Meanwhile, SCFs are crucial to the study of a wide range of phenomena including individual verbs, verb classes and varying syntactic structures. To illustrate the usefulness of our system for learner corpus research and second language acquisition (SLA), we investigate how L2 learners diversify their use of SCFs in text and how this diversity changes with L2 proficiency.
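As a rough illustration of the diversity question (this is not the authors' SCF identification system, and the frame labels below are hypothetical), one simple way to quantify how evenly a learner spreads verb uses across subcategorization frames is Shannon entropy over the SCF labels assigned to a text:

```python
import math
from collections import Counter

def scf_diversity(scf_labels):
    """Shannon entropy (bits) over a text's SCF labels: 0 when every verb
    token uses the same frame, higher when use is spread evenly across
    many frames. Assumes at least one label."""
    counts = Counter(scf_labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Tracking this score across proficiency bands would be one way to operationalize the question of whether SCF use diversifies as L2 proficiency rises; other diversity indices (e.g. type counts normalized for text length) are equally plausible choices.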
“Subcategorization frame identification for learner English”. Yi-Feng Huang, Akira Murakami, T. Alexopoulou & A. Korhonen. International Journal of Corpus Linguistics, 2020-12-08. doi:10.1075/ijcl.18097.hua