Send us your null results

Journal of Cultural Analytics · Q1, Arts and Humanities · Pub Date: 2020-01-22 · DOI: 10.22148/001c.11824
A. Piper
{"title":"将您的空结果发送给我们","authors":"A. Piper","doi":"10.22148/001c.11824","DOIUrl":null,"url":null,"abstract":"A considerable amount of work has been produced in quantitative fields addressing what has colloquially been called the \"replication crisis.\" By this is meant three related phenomena: 1) the low statistical power of many studies resulting in an inability to reproduce a similar effect size; 2) a bias towards selecting statistically \"significant\" results for publication; and 3) a tendency to not make data and code available for others to use. A considerable amount of work has been produced in quantitative fields addressing what has colloquially been called the \"replication crisis.\"1 By this is meant three related phenomena: 1) the low statistical power of many studies resulting in an inability to reproduce a similar effect size; 2) a bias towards selecting statistically \"significant\" results for publication; and 3) a tendency to not make data and code available for others to use. What this means in more straightforward language is that researchers (and the public) overwhelmingly focus on \"positive\" results; they tend to over-estimate how strong their results are (how large a difference some variable or combination of variables makes); and they bury a considerable amount of decisions/judgments in their research process that have an impact on the outcomes. The graph in Figure 1 down below represents the first two dimensions of this problem in very succinct form (see Simmons et al for a discussion of the third).2 Why does this matter for Cultural Analytics? After all, much of the work in CA is insulated from problem #1 (low power) because of the often large sample sizes used. Even small effects are mostly going to be reproducible with large enough samples. Many will also rightly point out that a focus on significance testing is not always at the heart of interpretive research. Regardless of the number of texts used, researchers often take a more descriptive or exploratory approach to their documents, where the idea of \"null\" models makes less sense. And problem #3 is dealt with through a code and data repository that accompanies most articles (at least in CA and at least in most cases). J O U R N A L O F C U L T U R A L A N A L Y T I C S 2 But these caveats overlook a larger and more systemic problem that has to do with selection bias towards positive results. Whether you are doing significance testing or just saying you have found something \"interesting,\" the emphasis in publication is almost always on finding something \"positive.\" This is as much a part of the culture of academic publishing as it is the current moment in the shift towards data-driven approaches for studying culture. There is enormous pressure in the field to report something positive -that a method \"worked\" or \"shows\" something. One of the enduring critiques of new computational methods is that they \"don't show us anything we didn't already know.\" While many would disagree (rightly pointing to positive examples of new knowledge) or see this as a classic case of \"hindsight bias\" (our colleagues' ability to magically always be right), it is actually true that in most cases these methods don't show us anything at all. It's just that you don't hear about those cases. If we were to take the set of all experiments ever conducted with a computer on some texts, I would expect that in (at least) 95% of those cases the procedure yielded no insight of interest. 
In other words, positive results would be very rare. And yet, miraculously, all articles in CA report a positive result (mine included). To be fair, this is true of literally all literary and cultural studies. No one to my knowledge has ever published an article that said, I read a lot of books or watched a lot of television shows and it turns out my ideas about them weren't significant. But this too happens all the time. We just never hear about it. It's time to change that culture. Researchers in other fields have made a variety of suggestions to address this issue, including pre-submitting articles prior to completion so acceptance isn't biased towards positive results, to making the research process as open and transparent as possible.3 At CA, we want to start by encouraging submission of pieces that don't show positive results, however broadly defined. This can be another way that the journal CA, but also work in cultural analytics more broadly, can begin to change research culture in the humanities and cultural studies. It means not only changing the scale of our evidence considered or making our judgments more transparent and testable. It also means being more transparent about all the cases where our efforts yield no discernible effect or insight. As others have called for, it is time to embrace failure as an epistemic good.4 This may be CA's most radical gesture yet in changing the culture of research in the field of cultural studies. J O U R N A L O F C U L T U R A L A N A L Y T I C S 3 So let me open the floodgates here: we pledge to publish your null result. By null result, I mean either something that shows no statistical significance (i.e. using machine learning, prizewinning novels cannot be distinguished from novels reviewed in the New York Times with a level of accuracy that exceeds random guessing). Or something that shows no discernibly interesting pattern from an interpretive point of view (we ran a topic modeling algorithm on all of ECCO and regardless of the parameters used the topics do not seem to represents reasonable categories of historical interest, i.e. it didn't work very well no matter what we did). These are examples of the kind of null results we're thinking of. I'm sure you can think of many, many more. It is important that the submission be as framed, justified and fleshed out as that positive result you've been salivating about publishing in the highest prestige place you can imagine. But just because the piece shows \"nothing\" (you know what I mean, don't get all postmodern on me), doesn't mean it shouldn't be published. If the question matters, then we ought to hear about how a method failed to address that question. This will not only save researchers time in knowing what to focus on, it can also open-up shared areas of inquiry—maybe there was a problem in the method that could be improved or maybe whatever you're looking for really doesn't have much of an effect. Only with repeated attempts can we ever get any confidence about spurious ideas or methodological limitations. Only then are we going to inhabit a research culture where everyone isn't always right. J O U R N A L O F C U L T U R A L A N A L Y T I C S 4 Fig. 1 The distribution on the top of the graph represents published results -overwhelmingly biased towards statistical significance (in blue, see the little dark blue part which buries the pink insignificant studies). 
The distribution on the right represents replicated results, which show a normal distribution that overwhelmingly favors insignificant results (pink). As commentators have increasingly pointed out, current models for statistical inference are mathematically biased towards over-estimating effects of real-world associations. From: Open Science Collaboration, \"Estimating the Reproducibility of Psychological Science,\" Science 349, aac4716 (2015). DOI: 10.1126/science.aac4716.","PeriodicalId":33005,"journal":{"name":"Journal of Cultural Analytics","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Send us your null results\",\"authors\":\"A. Piper\",\"doi\":\"10.22148/001c.11824\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A considerable amount of work has been produced in quantitative fields addressing what has colloquially been called the \\\"replication crisis.\\\" By this is meant three related phenomena: 1) the low statistical power of many studies resulting in an inability to reproduce a similar effect size; 2) a bias towards selecting statistically \\\"significant\\\" results for publication; and 3) a tendency to not make data and code available for others to use. A considerable amount of work has been produced in quantitative fields addressing what has colloquially been called the \\\"replication crisis.\\\"1 By this is meant three related phenomena: 1) the low statistical power of many studies resulting in an inability to reproduce a similar effect size; 2) a bias towards selecting statistically \\\"significant\\\" results for publication; and 3) a tendency to not make data and code available for others to use. What this means in more straightforward language is that researchers (and the public) overwhelmingly focus on \\\"positive\\\" results; they tend to over-estimate how strong their results are (how large a difference some variable or combination of variables makes); and they bury a considerable amount of decisions/judgments in their research process that have an impact on the outcomes. The graph in Figure 1 down below represents the first two dimensions of this problem in very succinct form (see Simmons et al for a discussion of the third).2 Why does this matter for Cultural Analytics? After all, much of the work in CA is insulated from problem #1 (low power) because of the often large sample sizes used. Even small effects are mostly going to be reproducible with large enough samples. Many will also rightly point out that a focus on significance testing is not always at the heart of interpretive research. Regardless of the number of texts used, researchers often take a more descriptive or exploratory approach to their documents, where the idea of \\\"null\\\" models makes less sense. And problem #3 is dealt with through a code and data repository that accompanies most articles (at least in CA and at least in most cases). J O U R N A L O F C U L T U R A L A N A L Y T I C S 2 But these caveats overlook a larger and more systemic problem that has to do with selection bias towards positive results. 
Whether you are doing significance testing or just saying you have found something \\\"interesting,\\\" the emphasis in publication is almost always on finding something \\\"positive.\\\" This is as much a part of the culture of academic publishing as it is the current moment in the shift towards data-driven approaches for studying culture. There is enormous pressure in the field to report something positive -that a method \\\"worked\\\" or \\\"shows\\\" something. One of the enduring critiques of new computational methods is that they \\\"don't show us anything we didn't already know.\\\" While many would disagree (rightly pointing to positive examples of new knowledge) or see this as a classic case of \\\"hindsight bias\\\" (our colleagues' ability to magically always be right), it is actually true that in most cases these methods don't show us anything at all. It's just that you don't hear about those cases. If we were to take the set of all experiments ever conducted with a computer on some texts, I would expect that in (at least) 95% of those cases the procedure yielded no insight of interest. In other words, positive results would be very rare. And yet, miraculously, all articles in CA report a positive result (mine included). To be fair, this is true of literally all literary and cultural studies. No one to my knowledge has ever published an article that said, I read a lot of books or watched a lot of television shows and it turns out my ideas about them weren't significant. But this too happens all the time. We just never hear about it. It's time to change that culture. Researchers in other fields have made a variety of suggestions to address this issue, including pre-submitting articles prior to completion so acceptance isn't biased towards positive results, to making the research process as open and transparent as possible.3 At CA, we want to start by encouraging submission of pieces that don't show positive results, however broadly defined. This can be another way that the journal CA, but also work in cultural analytics more broadly, can begin to change research culture in the humanities and cultural studies. It means not only changing the scale of our evidence considered or making our judgments more transparent and testable. It also means being more transparent about all the cases where our efforts yield no discernible effect or insight. As others have called for, it is time to embrace failure as an epistemic good.4 This may be CA's most radical gesture yet in changing the culture of research in the field of cultural studies. J O U R N A L O F C U L T U R A L A N A L Y T I C S 3 So let me open the floodgates here: we pledge to publish your null result. By null result, I mean either something that shows no statistical significance (i.e. using machine learning, prizewinning novels cannot be distinguished from novels reviewed in the New York Times with a level of accuracy that exceeds random guessing). Or something that shows no discernibly interesting pattern from an interpretive point of view (we ran a topic modeling algorithm on all of ECCO and regardless of the parameters used the topics do not seem to represents reasonable categories of historical interest, i.e. it didn't work very well no matter what we did). These are examples of the kind of null results we're thinking of. I'm sure you can think of many, many more. 
It is important that the submission be as framed, justified and fleshed out as that positive result you've been salivating about publishing in the highest prestige place you can imagine. But just because the piece shows \\\"nothing\\\" (you know what I mean, don't get all postmodern on me), doesn't mean it shouldn't be published. If the question matters, then we ought to hear about how a method failed to address that question. This will not only save researchers time in knowing what to focus on, it can also open-up shared areas of inquiry—maybe there was a problem in the method that could be improved or maybe whatever you're looking for really doesn't have much of an effect. Only with repeated attempts can we ever get any confidence about spurious ideas or methodological limitations. Only then are we going to inhabit a research culture where everyone isn't always right. J O U R N A L O F C U L T U R A L A N A L Y T I C S 4 Fig. 1 The distribution on the top of the graph represents published results -overwhelmingly biased towards statistical significance (in blue, see the little dark blue part which buries the pink insignificant studies). The distribution on the right represents replicated results, which show a normal distribution that overwhelmingly favors insignificant results (pink). As commentators have increasingly pointed out, current models for statistical inference are mathematically biased towards over-estimating effects of real-world associations. From: Open Science Collaboration, \\\"Estimating the Reproducibility of Psychological Science,\\\" Science 349, aac4716 (2015). DOI: 10.1126/science.aac4716.\",\"PeriodicalId\":33005,\"journal\":{\"name\":\"Journal of Cultural Analytics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-01-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Cultural Analytics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.22148/001c.11824\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Arts and Humanities\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Cultural Analytics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22148/001c.11824","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Arts and Humanities","Score":null,"Total":0}
Citations: 0

Abstract

A considerable amount of work has been produced in quantitative fields addressing what has colloquially been called the "replication crisis."1 By this is meant three related phenomena: 1) the low statistical power of many studies, resulting in an inability to reproduce a similar effect size; 2) a bias towards selecting statistically "significant" results for publication; and 3) a tendency not to make data and code available for others to use. What this means in more straightforward language is that researchers (and the public) overwhelmingly focus on "positive" results; they tend to overestimate how strong their results are (how large a difference some variable or combination of variables makes); and they bury in their research process a considerable number of decisions and judgments that have an impact on the outcomes. The graph in Figure 1 below represents the first two dimensions of this problem in very succinct form (see Simmons et al. for a discussion of the third).2

Why does this matter for Cultural Analytics? After all, much of the work in CA is insulated from problem #1 (low power) because of the often large sample sizes used. Even small effects are mostly going to be reproducible with large enough samples. Many will also rightly point out that a focus on significance testing is not always at the heart of interpretive research. Regardless of the number of texts used, researchers often take a more descriptive or exploratory approach to their documents, where the idea of "null" models makes less sense. And problem #3 is dealt with through a code and data repository that accompanies most articles (at least in CA, and at least in most cases).
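To make the sample-size point concrete, a standard power calculation is enough. The sketch below is an illustrative calculation with statsmodels (not an analysis from any CA study): a small standardized effect (Cohen's d = 0.2) that is nearly undetectable at conventional sample sizes becomes almost certain to be detected once samples reach corpus scale.

```python
# Illustrative power calculation: probability of detecting a small effect
# (Cohen's d = 0.2) with a two-sample t-test at alpha = 0.05, as the
# per-group sample size grows.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (50, 200, 1000, 5000):
    power = analysis.power(effect_size=0.2, nobs1=n, alpha=0.05, ratio=1.0)
    print(f"n per group = {n:>5}: power = {power:.2f}")
# Roughly: 0.17 at n = 50, 0.51 at n = 200, and effectively 1.0 by n = 5000.
```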
But these caveats overlook a larger and more systemic problem that has to do with selection bias towards positive results. Whether you are doing significance testing or just saying you have found something "interesting," the emphasis in publication is almost always on finding something "positive." This is as much a part of the culture of academic publishing as it is of the current moment in the shift towards data-driven approaches for studying culture. There is enormous pressure in the field to report something positive: that a method "worked" or "shows" something. One of the enduring critiques of new computational methods is that they "don't show us anything we didn't already know." While many would disagree (rightly pointing to positive examples of new knowledge) or see this as a classic case of "hindsight bias" (our colleagues' ability to magically always be right), it is actually true that in most cases these methods don't show us anything at all. It's just that you don't hear about those cases. If we were to take the set of all experiments ever conducted with a computer on some texts, I would expect that in (at least) 95% of those cases the procedure yielded no insight of interest. In other words, positive results would be very rare. And yet, miraculously, all articles in CA report a positive result (mine included).

To be fair, this is true of literally all literary and cultural studies. No one to my knowledge has ever published an article that said, I read a lot of books or watched a lot of television shows and it turns out my ideas about them weren't significant. But this too happens all the time. We just never hear about it. It's time to change that culture. Researchers in other fields have made a variety of suggestions to address this issue, from pre-submitting articles prior to completion, so that acceptance isn't biased towards positive results, to making the research process as open and transparent as possible.3 At CA, we want to start by encouraging the submission of pieces that don't show positive results, however broadly defined. This can be another way that the journal CA, but also work in cultural analytics more broadly, can begin to change research culture in the humanities and cultural studies. It means not only changing the scale of the evidence we consider or making our judgments more transparent and testable. It also means being more transparent about all the cases where our efforts yield no discernible effect or insight. As others have called for, it is time to embrace failure as an epistemic good.4 This may be CA's most radical gesture yet in changing the culture of research in the field of cultural studies.

So let me open the floodgates here: we pledge to publish your null result. By null result, I mean either something that shows no statistical significance (e.g. using machine learning, prizewinning novels cannot be distinguished from novels reviewed in the New York Times with a level of accuracy that exceeds random guessing), or something that shows no discernibly interesting pattern from an interpretive point of view (we ran a topic modeling algorithm on all of ECCO and, regardless of the parameters used, the topics do not seem to represent reasonable categories of historical interest; i.e. it didn't work very well no matter what we did). These are examples of the kind of null results we're thinking of. I'm sure you can think of many, many more.
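For the first kind of null result, the check is straightforward to state: does cross-validated accuracy exceed what shuffling the labels would produce? The sketch below is a hedged, self-contained illustration using scikit-learn's permutation test; random vectors stand in for the document features, so with a real corpus you would substitute your own feature matrix and labels.

```python
# Illustrative check of a "no statistical significance" null result: is
# cross-validated classification accuracy distinguishable from chance?
# Random vectors stand in for real document features, so the expected
# answer here is "no".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))    # 200 "novels", 500 stand-in features
y = rng.integers(0, 2, size=200)   # two arbitrary classes (e.g. prizewinning vs. reviewed)

clf = LogisticRegression(max_iter=1000)
score, perm_scores, p_value = permutation_test_score(
    clf, X, y, cv=5, n_permutations=200, scoring="accuracy", random_state=0
)
print(f"accuracy = {score:.2f}, permuted mean = {perm_scores.mean():.2f}, p = {p_value:.2f}")
# An accuracy near 0.5 with a large p-value is exactly the "nothing" worth reporting.
```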
It is important that the submission be as framed, justified, and fleshed out as that positive result you've been salivating about publishing in the highest-prestige place you can imagine. But just because the piece shows "nothing" (you know what I mean, don't get all postmodern on me) doesn't mean it shouldn't be published. If the question matters, then we ought to hear about how a method failed to address that question. This will not only save researchers time in knowing what to focus on, it can also open up shared areas of inquiry: maybe there was a problem in the method that could be improved, or maybe whatever you're looking for really doesn't have much of an effect. Only with repeated attempts can we ever get any confidence about spurious ideas or methodological limitations. Only then are we going to inhabit a research culture where everyone isn't always right.

Fig. 1: The distribution at the top of the graph represents published results, overwhelmingly biased towards statistical significance (in blue; note the little dark blue part, which buries the pink insignificant studies). The distribution on the right represents replicated results, which show a normal distribution that overwhelmingly favors insignificant results (pink). As commentators have increasingly pointed out, current models for statistical inference are mathematically biased towards over-estimating the effects of real-world associations. From: Open Science Collaboration, "Estimating the Reproducibility of Psychological Science," Science 349, aac4716 (2015). DOI: 10.1126/science.aac4716.
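The selection mechanism behind Fig. 1 can also be reproduced in a few lines of simulation. The sketch below is an illustration, not a re-analysis of the Open Science Collaboration data: it runs many underpowered studies of a genuinely small effect and "publishes" only the significant, positive ones, and the published estimates come out well above the true effect.

```python
# Illustrative simulation of publication bias: many low-powered studies of a
# true effect of d = 0.2, with only significant positive results "published".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_d, n = 0.2, 50                          # small true effect, 50 per group
published = []
for _ in range(10_000):
    treatment = rng.normal(true_d, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    t, p = stats.ttest_ind(treatment, control)
    if p < 0.05 and t > 0:                   # the selection filter
        published.append(treatment.mean() - control.mean())
print(f"true d = {true_d}, mean published estimate = {np.mean(published):.2f}")
# The published mean typically lands at roughly double the true effect in this setup.
```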