Kent K. Chang and S. Dedeo, "Divergence and the Complexity of Difference in Text and Culture," Journal of Cultural Analytics (2020-10-07). https://doi.org/10.22148/001c.17585

Measuring how much two documents differ is a basic task in the quantitative analysis of text. Because difference is a complex, interpretive concept, researchers often operationalize difference as distance, a mathematical function that represents documents through a metaphor of physical space. Yet the constraints of that metaphor mean that distance can only capture some of the ways that documents can relate to each other. We show how a more general concept, divergence, can help solve this problem, alerting us to new ways in which documents can relate to each other. In contrast to distance, divergence can capture enclosure relationships, where two documents differ because the patterns found in one are a partial subset of those in the other, and the emergence of shortcuts, where two documents can be brought closer through mediation by a third. We provide an example of this difference measure, Kullback–Leibler Divergence, and apply it to two worked examples: the presentation of scientific arguments in Charles Darwin’s Origin of Species (1859) and the rhetorical structure of philosophical texts by Aristotle, David Hume, and Immanuel Kant. These examples illuminate the complex relationship between time and what we refer to as an archive’s “enclosure architecture”, and show how divergence can be used in the quantitative analysis of historical, literary, and cultural texts to reveal cognitive structures invisible to spatial metaphors.
Katherine Elkins and Jon Chun, "Can GPT-3 Pass a Writer’s Turing Test?," Journal of Cultural Analytics (2020-09-14). https://doi.org/10.22148/001C.17212

Until recently, the field of natural language generation relied upon formalized grammar systems, small-scale statistical models, and lengthy sets of heuristic rules. This older technology was fairly limited and brittle: it could remix language into word-salad poems or chat with humans within narrowly defined topics. Recently, very large-scale statistical language models have dramatically advanced the field, and GPT-3 is just one example. It can internalize the rules of language without explicit programming. Instead, much like a human child, GPT-3 learns language through repeated exposure, albeit on a much larger scale. Without explicit rules, it can sometimes fail at the simplest of linguistic tasks, but it can also excel at more difficult ones like imitating an author or waxing philosophical.
Heather Froehlich, "Dramatic Structure and Social Status in Shakespeare’s Plays," Journal of Cultural Analytics (2020-04-08). https://doi.org/10.22148/001c.12556

This article discusses ways that dramatic structure can be analyzed through the use of social titles in Shakespeare’s plays. Freytag’s (1863) pyramid of dramatic structure is based on patterns he found in Shakespearean and Greek tragedy; more recently, computational methods have been employed to model narrative structure at scale. However, no study has yet asked whether specific lexical items can be indicative of dramatic structure. Using Shakespeare’s plays as an example, this essay fills that gap by observing how social titles can be used to explore narrative structure.
Ana Jofre, Vincent J. Berardi, Carl Bennett, M. Reale, and Josh Cole, "Faces extracted from Time Magazine 1923-2014," Journal of Cultural Analytics (2020-03-18). https://doi.org/10.22148/001c.12265

We present metadata of labeled faces extracted from a Time magazine archive that contains 3,389 issues ranging from 1923 to 2012. The data we are publishing consist of three subsets: Dataset 1, the gender labels and image characteristics for each of the 327,322 faces automatically extracted from the entire Time archive; Dataset 2, a subset of 8,789 faces from a sample of 100 issues, labeled by Amazon Mechanical Turk (AMT) workers along ten dimensions (including gender) and used as training data to produce Dataset 1; and Dataset 3, the raw data collected from the AMT workers before being processed to produce Dataset 2.
Fotis Jannidis, "On the perceived complexity of literature. A response to Nan Z. Da," Journal of Cultural Analytics (2020-01-25). https://doi.org/10.22148/001c.11829

At the center of Nan Z. Da's article is the claim that quantitative methods cannot produce any useful insights with respect to literary texts: "CLS's methodology and premises are similar to those used in professional sectors (if more primitive), but they are missing economic or mathematical justification for their drastic reduction of literary, literary-historical, and linguistic complexity. In these other sectors where we are truly dealing with large data sets, the purposeful reduction of features like nuance, lexical variance, and grammatical complexity is desirable (for that industry's standards and goals). In literary studies, there is no rationale for such reductionism; in fact, the discipline is about reducing [...]"
Giovanni Colavizza, "Are We Breaking the Social Contract?," Journal of Cultural Analytics (2020-01-24). https://doi.org/10.22148/001c.11828

The ambition of scholarship in the humanities is to systematically understand the human condition in all its aspects and times. To this end, humanists are more apt to interpret specific phenomena than generalize to previously unseen observations. When we consider scholarship as a collective effort, this has consequences. I argue that most of the humanities rely on a distinct social contract. This contract states that interpretive arguments are expected to be plausible and the grounds on which they are made, verifiable. This is the scholarly purpose (albeit not the rhetorical one) of most of what goes in our footnotes, especially references. Reference verification is mostly a virtual act, i.e., it all too rarely happens in practice, yet it is in principle always possible. Any individual scholar in any domain in the humanities can, by virtue of this contract, verify the evidence supporting any argument in a non-mediated way. This is essential to, at the very least, distinguish between solid and haphazard arguments.
A. Piper, "Send us your null results," Journal of Cultural Analytics (2020-01-22). https://doi.org/10.22148/001c.11824

A considerable amount of work has been produced in quantitative fields addressing what has colloquially been called the "replication crisis." By this is meant three related phenomena: 1) the low statistical power of many studies, resulting in an inability to reproduce a similar effect size; 2) a bias towards selecting statistically "significant" results for publication; and 3) a tendency not to make data and code available for others to use. In more straightforward language, researchers (and the public) overwhelmingly focus on "positive" results; they tend to over-estimate how strong their results are (how large a difference some variable or combination of variables makes); and they bury a considerable number of decisions and judgments in their research process that have an impact on the outcomes. The graph in Figure 1 represents the first two dimensions of this problem in very succinct form (see Simmons et al. for a discussion of the third).

Why does this matter for Cultural Analytics? After all, much of the work in CA is insulated from problem #1 (low power) because of the often large sample sizes used. Even small effects are mostly going to be reproducible with large enough samples. Many will also rightly point out that a focus on significance testing is not always at the heart of interpretive research. Regardless of the number of texts used, researchers often take a more descriptive or exploratory approach to their documents, where the idea of "null" models makes less sense. And problem #3 is dealt with through a code and data repository that accompanies most articles (at least in CA and at least in most cases).

But these caveats overlook a larger and more systemic problem that has to do with selection bias towards positive results. Whether you are doing significance testing or just saying you have found something "interesting," the emphasis in publication is almost always on finding something "positive." This is as much a part of the culture of academic publishing as it is of the current moment in the shift towards data-driven approaches for studying culture. There is enormous pressure in the field to report something positive: that a method "worked" or "shows" something. One of the enduring critiques of new computational methods is that they "don't show us anything we didn't already know." While many would disagree (rightly pointing to positive examples of new knowledge) or see this as a classic case of "hindsight bias" (our [...]
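The claim that large samples insulate CA from the low-power problem is easy to check by simulation. The sketch below (standard two-sample t-tests on synthetic data; not from the article) estimates statistical power for a small standardized effect at several sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(effect_size, n, trials=1000, alpha=0.05):
    """Fraction of simulated two-sample t-tests reaching p < alpha when a
    true standardized mean difference of effect_size is present."""
    hits = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(effect_size, 1.0, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / trials

# A small effect (d = 0.1) is hit-or-miss at n = 100 per group but is
# detected almost every time at corpus scale.
for n in (100, 1000, 10000):
    print(n, power(0.1, n))
```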
Gunther Martens, "Annotating Narrative Levels: Review of Guideline No. 7," Journal of Cultural Analytics (2020-01-21). https://doi.org/10.22148/001c.11775

The guideline under review builds on the acquired knowledge of the field of narrative theory. Its main references are to classical structuralist narratology, both in terms of definitions (Todorov, Genette, Dolezel) and by way of its guiding principles, which strive for simplicity, hierarchy, minimal interpretation and a strict focus on the annotation of text-intrinsic, linguistic aspects of narrative. Most recent attempts to do “computational narratology” have been similarly “structuralist” in outlook, albeit with a stronger focus on aspects of story grammar: the basic constituents of the story are to some extent hard-coded into the language of any story, and are thus more easily formalized. The present guideline goes well beyond this restriction to story grammar. In fact, the guideline promises to tackle aspects of narrative transmission from the highest level (author) to the lowest (character), but also demarcation of scenes at the level of plot, as well as focalisation. Thus, the guideline can be said to be very wide in scope.
T. McEnaney, "Annotating Narrative Levels: Review of Guideline No. 8," Journal of Cultural Analytics (2020-01-15). https://doi.org/10.22148/16.064

“Let me tell you a story.” The proposed guidelines suggest that this phrase serve as the heuristic that readers supply at the beginning of any possible embedded narrative to identify a shift in narrative frames or levels. (The difference between “frame” and “level,” although perhaps confusing in the history of narratology, does not seem like an important distinction at this stage of the project.) This simple phrase, the author suggests, can replace a field of narrative theory they feel would “simply confuse my student annotators.” However simple the phrase might seem, it in fact conceals a number of key narratological issues: focalization, temporal indices, diction/register, person, fictional paratexts, duration, and, no doubt, others. The question for the guidelines is whether one can leapfrog the particularity of these issues if students use the above phrase to annotate texts with XML tags and produce operational scripts that identify the nested narratives. As it currently stands, students seem capable of learning the basic idea of nested narratives and tagging changes in narrative frames, but there are no real results to confirm the project’s success, as the author reports that they are not yet able to confirm any inter-annotator agreement.