Making Sense of Citations

ACS Sensors · IF 8.2 · CAS Region 1 (Chemistry) · JCR Q1 (Chemistry, Analytical) · Pub Date: 2024-11-22 · DOI: 10.1021/acssensors.4c03076
Andrew J. deMello
{"title":"了解引文","authors":"Andrew J. deMello","doi":"10.1021/acssensors.4c03076","DOIUrl":null,"url":null,"abstract":"This month, I would like to share a few personal thoughts about bibliometric indicators and specifically citations. As any scientist, publisher or journal editor will likely admit, the number of downloads, reads or citations associated with a journal publication are, for better or worse, ubiquitous metrics in modern-day scientific publishing. But what does a citation tell us? If an author cites a publication, they are simply making a declaration that a piece of work has relevance to their activities/interests and is worthy of comment. A citation makes no judgment on the “quality” of the cited work, but rather informs the reader that the prior study is worth inspection. That said, and to many, the number of citations <i>does</i> provide a measure of the relative “importance” or “impact” of an article to the wider community. My intention here is not to settle that argument, although I would say that broad-brush citation counting clearly fails to assess impact at the article level, ignoring the influence of the research field or time of publication, and that more nuanced metrics, such the <i>relative citation ratio</i>, (1) are far more instructive. Rather, I would like to recount an incident in my own research group. In the course of his studies, one of my graduate students realized that he needed an optical sensor for Pd<sup>2+</sup> quantification. The sensor needed to be accessible, simple to implement, provide for good analytical sensitivities and detection limits and work in aqueous media. He performed a literature search and soon came across a number of optical sensors that on paper looked promising. One of these looked especially interesting, since it was based on measuring the fluorescence of a readily available coumarin laser dye. The authors claimed that their “turn-off” sensor was cheap, provided excellent (nM) detection limits, could sense Pd<sup>2+</sup> in aqueous environments and could detect Pd<sup>2+</sup> in live cells. The study had been published in a well-respected journal specializing in photophysical and photochemical research and had garnered over 20 citations within the four years since publication. All looked fine, so we decided to adopt the sensor and use it for the problem in hand. After a few weeks of testing and experimentation, we realized that the sensor might not be as useful as we had been led to believe. Through systematic reproduction of the experimental procedures reported in the original paper and a number of additional experiments, we came to the (correct) conclusion that the coumarin derivative was in fact not a fluorescence sensor for Pd<sup>2+</sup> but was rather an extremely poor pH sensor able to operate over a restricted range of 1.5 pH units. This was clearly disappointing, but scientific research is rarely straightforward, and setbacks of this kind are not uncommon. What was far more worrisome was the fact that a number of the experimental procedures reported in the original paper were inaccurately or incompletely presented. This hindered our assessment of the sensor and meant that much effort was required to pinpoint earlier mistakes. 
This personal anecdote, rather than being an opportunistic diatribe, is intended to highlight the importance of providing an accurate and complete description of experimental methods used to generate the data presented in a scientific publication and the consequences of publishing inaccurate or erroneous findings. Fortunately for us, we developed an alternative Pd<sup>2+</sup> sensor and additionally reported our “re-evaluation” of original work in the same peer-reviewed journal. However, this made me think more deeply about how we use the literature to inform and underpin contemporary science. The most obvious problem faced by all researchers, whatever their field of expertise, is the sheer number of peer-reviewed papers published each year. To give you some idea of the problem, over 2.8 million new papers were published and indexed by the Scopus and Web of Science databases in 2022: a number 47% higher than in 2016. (2) Even the most dedicated researcher would only be able to read a miniscule fraction of all papers relevant to their interests, so how should one prioritize and select which papers should be looked at and which should not? There is obviously no correct answer to this question, but for many, the strategy of choice will involve the use of scientific abstract and citation databases, such as <i>Web of Science</i>, <i>Scopus</i>, <i>PubMed</i>, <i>SciFinder</i> and <i>The Lens</i>, to find publications relevant to their area of interest. A citation index or database is simply an ordered register of cited articles along with a register of citing articles. Its utility lies in its ability to connect or associate scientific concepts and ideas. Put simply, if an author cites a previously published piece of work in their own paper, they have created an unambiguous link between their science and the prior work. Science citation indexing in its modern form was introduced by Eugene Garfield in the 1950s, with the primary goal of simplifying information retrieval, rather than identifying “important” or “impactful” publications. (3) Interestingly, a stated driver of his original science citation index was also to “<i>eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers</i>”. Indeed, Garfield opines that “<i>even if there were no other use for the citation index than that of minimizing the citation of poor data, the index would be well worth the effort</i>”. This particular comment takes me back to my “palladium problem”. Perhaps, if I had looked more closely at the articles that cited the original paper, I would have uncovered concerns regarding the method and its sensing utility? So, having a spare hour, I did exactly this. Of course, this is one paper from many millions, but the results were instructive to me at least. In broad terms, almost all citations (to the original paper) appeared in the introductory section and simply stated that a Pd<sup>2+</sup> sensor based on a coumarin dye had been reported. 80% made no comment on the quality (in terms of performance metrics) or utility of the work, 15% were self-citations by the authors, with only one paper providing comment on an aspect of the original data. Based on this analysis, I do not think that we can be too hard on ourselves for believing that the Pd<sup>2+</sup> sensor would be fit for purpose. 
Nonetheless, how could we have leveraged the tools and features of modern electronic publishing to make a better analysis? One possible strategy could be to discriminate between citations based on their origin. For example, references in review articles may often have been cited without any meaningful analysis of the veracity of the work, while references cited in the results section of a research article are more likely to have been scrutinized by the authors in relation to their own work, whether the citation highlights a “good” or “bad” issue. Providing the reader with such information would clearly impart extra contrast to the citation metric and aid in their ability to identify articles “important” to their work. Fortunately, the advent of AI is beginning to make valuable contributions in this regard and a number of “smart citation” tools are being introduced. For example, citation analysis platforms such as Scite (4) leverage AI to better understand and utilize scientific citations. Rather than simply reporting the occurrence of a citation, citations can be classified by their contextual usage, for example, through the number of supporting, contrasting, and mentioning citation statements. This allows researchers to evaluate the utility and importance of a reference and ultimately enhance the scientific method. This would be especially useful in our field of sensor science, where knowledge of the sensors or sensing methods that have been successfully used in given scenarios would be invaluable when identifying the need to improve or develop new sensors. It will be some time before “smart citation metrics” are widely adopted by the scientific community. However, it is clear that all citations are not equal, and that we should be smarter in both the way we cite literature and the way we use literature citations. This article references 4 other publications. This article has not yet been cited by other publications.","PeriodicalId":24,"journal":{"name":"ACS Sensors","volume":"11 1","pages":""},"PeriodicalIF":8.2000,"publicationDate":"2024-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Making Sense of Citations\",\"authors\":\"Andrew J. deMello\",\"doi\":\"10.1021/acssensors.4c03076\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This month, I would like to share a few personal thoughts about bibliometric indicators and specifically citations. As any scientist, publisher or journal editor will likely admit, the number of downloads, reads or citations associated with a journal publication are, for better or worse, ubiquitous metrics in modern-day scientific publishing. But what does a citation tell us? If an author cites a publication, they are simply making a declaration that a piece of work has relevance to their activities/interests and is worthy of comment. A citation makes no judgment on the “quality” of the cited work, but rather informs the reader that the prior study is worth inspection. That said, and to many, the number of citations <i>does</i> provide a measure of the relative “importance” or “impact” of an article to the wider community. My intention here is not to settle that argument, although I would say that broad-brush citation counting clearly fails to assess impact at the article level, ignoring the influence of the research field or time of publication, and that more nuanced metrics, such the <i>relative citation ratio</i>, (1) are far more instructive. 
Rather, I would like to recount an incident in my own research group. In the course of his studies, one of my graduate students realized that he needed an optical sensor for Pd<sup>2+</sup> quantification. The sensor needed to be accessible, simple to implement, provide for good analytical sensitivities and detection limits and work in aqueous media. He performed a literature search and soon came across a number of optical sensors that on paper looked promising. One of these looked especially interesting, since it was based on measuring the fluorescence of a readily available coumarin laser dye. The authors claimed that their “turn-off” sensor was cheap, provided excellent (nM) detection limits, could sense Pd<sup>2+</sup> in aqueous environments and could detect Pd<sup>2+</sup> in live cells. The study had been published in a well-respected journal specializing in photophysical and photochemical research and had garnered over 20 citations within the four years since publication. All looked fine, so we decided to adopt the sensor and use it for the problem in hand. After a few weeks of testing and experimentation, we realized that the sensor might not be as useful as we had been led to believe. Through systematic reproduction of the experimental procedures reported in the original paper and a number of additional experiments, we came to the (correct) conclusion that the coumarin derivative was in fact not a fluorescence sensor for Pd<sup>2+</sup> but was rather an extremely poor pH sensor able to operate over a restricted range of 1.5 pH units. This was clearly disappointing, but scientific research is rarely straightforward, and setbacks of this kind are not uncommon. What was far more worrisome was the fact that a number of the experimental procedures reported in the original paper were inaccurately or incompletely presented. This hindered our assessment of the sensor and meant that much effort was required to pinpoint earlier mistakes. This personal anecdote, rather than being an opportunistic diatribe, is intended to highlight the importance of providing an accurate and complete description of experimental methods used to generate the data presented in a scientific publication and the consequences of publishing inaccurate or erroneous findings. Fortunately for us, we developed an alternative Pd<sup>2+</sup> sensor and additionally reported our “re-evaluation” of original work in the same peer-reviewed journal. However, this made me think more deeply about how we use the literature to inform and underpin contemporary science. The most obvious problem faced by all researchers, whatever their field of expertise, is the sheer number of peer-reviewed papers published each year. To give you some idea of the problem, over 2.8 million new papers were published and indexed by the Scopus and Web of Science databases in 2022: a number 47% higher than in 2016. (2) Even the most dedicated researcher would only be able to read a miniscule fraction of all papers relevant to their interests, so how should one prioritize and select which papers should be looked at and which should not? There is obviously no correct answer to this question, but for many, the strategy of choice will involve the use of scientific abstract and citation databases, such as <i>Web of Science</i>, <i>Scopus</i>, <i>PubMed</i>, <i>SciFinder</i> and <i>The Lens</i>, to find publications relevant to their area of interest. 
A citation index or database is simply an ordered register of cited articles along with a register of citing articles. Its utility lies in its ability to connect or associate scientific concepts and ideas. Put simply, if an author cites a previously published piece of work in their own paper, they have created an unambiguous link between their science and the prior work. Science citation indexing in its modern form was introduced by Eugene Garfield in the 1950s, with the primary goal of simplifying information retrieval, rather than identifying “important” or “impactful” publications. (3) Interestingly, a stated driver of his original science citation index was also to “<i>eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers</i>”. Indeed, Garfield opines that “<i>even if there were no other use for the citation index than that of minimizing the citation of poor data, the index would be well worth the effort</i>”. This particular comment takes me back to my “palladium problem”. Perhaps, if I had looked more closely at the articles that cited the original paper, I would have uncovered concerns regarding the method and its sensing utility? So, having a spare hour, I did exactly this. Of course, this is one paper from many millions, but the results were instructive to me at least. In broad terms, almost all citations (to the original paper) appeared in the introductory section and simply stated that a Pd<sup>2+</sup> sensor based on a coumarin dye had been reported. 80% made no comment on the quality (in terms of performance metrics) or utility of the work, 15% were self-citations by the authors, with only one paper providing comment on an aspect of the original data. Based on this analysis, I do not think that we can be too hard on ourselves for believing that the Pd<sup>2+</sup> sensor would be fit for purpose. Nonetheless, how could we have leveraged the tools and features of modern electronic publishing to make a better analysis? One possible strategy could be to discriminate between citations based on their origin. For example, references in review articles may often have been cited without any meaningful analysis of the veracity of the work, while references cited in the results section of a research article are more likely to have been scrutinized by the authors in relation to their own work, whether the citation highlights a “good” or “bad” issue. Providing the reader with such information would clearly impart extra contrast to the citation metric and aid in their ability to identify articles “important” to their work. Fortunately, the advent of AI is beginning to make valuable contributions in this regard and a number of “smart citation” tools are being introduced. For example, citation analysis platforms such as Scite (4) leverage AI to better understand and utilize scientific citations. Rather than simply reporting the occurrence of a citation, citations can be classified by their contextual usage, for example, through the number of supporting, contrasting, and mentioning citation statements. This allows researchers to evaluate the utility and importance of a reference and ultimately enhance the scientific method. This would be especially useful in our field of sensor science, where knowledge of the sensors or sensing methods that have been successfully used in given scenarios would be invaluable when identifying the need to improve or develop new sensors. 
It will be some time before “smart citation metrics” are widely adopted by the scientific community. However, it is clear that all citations are not equal, and that we should be smarter in both the way we cite literature and the way we use literature citations. This article references 4 other publications. This article has not yet been cited by other publications.\",\"PeriodicalId\":24,\"journal\":{\"name\":\"ACS Sensors\",\"volume\":\"11 1\",\"pages\":\"\"},\"PeriodicalIF\":8.2000,\"publicationDate\":\"2024-11-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ACS Sensors\",\"FirstCategoryId\":\"92\",\"ListUrlMain\":\"https://doi.org/10.1021/acssensors.4c03076\",\"RegionNum\":1,\"RegionCategory\":\"化学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"CHEMISTRY, ANALYTICAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACS Sensors","FirstCategoryId":"92","ListUrlMain":"https://doi.org/10.1021/acssensors.4c03076","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, ANALYTICAL","Score":null,"Total":0}
引用次数: 0

摘要

现代形式的科学引文索引由尤金-加菲尔德(Eugene Garfield)于 20 世纪 50 年代引入,其主要目标是简化信息检索,而不是识别 "重要 "或 "有影响 "的出版物。(3) 有趣的是,他最初的科学引文索引的一个既定驱动力也是 "通过让有良知的学者了解对早期论文的批评意见,消除不加批判地引用虚假、不完整或过时数据的现象"。事实上,加菲尔德认为,"即使引文索引除了最大限度地减少对拙劣数据的引用之外没有其他用途,该索引也是非常值得的"。这一评论让我想起了我的 "钯金问题"。也许,如果我更仔细地研究一下引用原始论文的文章,就会发现人们对该方法及其传感效用的担忧?于是,我利用空闲时间做了这件事。当然,这只是数百万篇论文中的一篇,但结果至少对我很有启发。从广义上讲,几乎所有的引文(原始论文)都出现在引言部分,只是简单地说明已经报道了一种基于香豆素染料的 Pd2+ 传感器。80%的引用没有对工作的质量(性能指标)或实用性进行评论,15%是作者的自我引用,只有一篇论文对原始数据的某个方面进行了评论。根据上述分析,我认为我们不能因为相信 Pd2+ 传感器适合用途而对自己过于苛刻。尽管如此,我们如何才能利用现代电子出版的工具和功能做出更好的分析呢?一种可能的策略是根据引文的来源对其进行区分。例如,综述文章中的参考文献可能经常是在没有对工作的真实性进行任何有意义的分析的情况下被引用的,而在研究文章的结果部分引用的参考文献则更有可能是作者结合自己的工作仔细研究过的,无论引用的内容是突出了 "好 "还是 "坏 "的问题。向读者提供此类信息显然会给引文指标带来额外的对比度,有助于读者识别对其工作 "重要 "的文章。幸运的是,人工智能的出现开始在这方面做出有价值的贡献,许多 "智能引文 "工具正在被引入。例如,Scite(4)等引文分析平台利用人工智能来更好地理解和利用科学引文。引文不是简单地报告引文的出现情况,而是可以根据其上下文使用情况进行分类,例如,通过支持、对比和提及引文陈述的数量进行分类。这样,研究人员就可以评估参考文献的实用性和重要性,最终提升科学方法的水平。这对我们的传感器科学领域尤其有用,因为在确定是否需要改进或开发新的传感器时,了解已在特定场景中成功使用过的传感器或传感方法将非常有价值。"智能引文度量 "被科学界广泛采用尚需时日。不过,很显然,所有的引文都不尽相同,我们应该在引用文献的方式和使用文献引文的方式上更加智能。本文引用了 4 篇其他出版物。本文尚未被其他出版物引用。
本文章由计算机程序翻译,如有差异,请以英文原文为准。
查看原文
分享 分享
微信好友 朋友圈 QQ好友 复制链接
本刊更多论文
Making Sense of Citations
This month, I would like to share a few personal thoughts about bibliometric indicators and specifically citations. As any scientist, publisher or journal editor will likely admit, the numbers of downloads, reads or citations associated with a journal publication are, for better or worse, ubiquitous metrics in modern-day scientific publishing. But what does a citation tell us? If an author cites a publication, they are simply making a declaration that a piece of work has relevance to their activities/interests and is worthy of comment. A citation makes no judgment on the “quality” of the cited work, but rather informs the reader that the prior study is worth inspection. That said, and to many, the number of citations does provide a measure of the relative “importance” or “impact” of an article to the wider community. My intention here is not to settle that argument, although I would say that broad-brush citation counting clearly fails to assess impact at the article level, ignoring the influence of the research field or time of publication, and that more nuanced metrics, such as the relative citation ratio (1), are far more instructive.

Rather, I would like to recount an incident in my own research group. In the course of his studies, one of my graduate students realized that he needed an optical sensor for Pd2+ quantification. The sensor needed to be accessible, simple to implement, provide good analytical sensitivity and detection limits, and work in aqueous media. He performed a literature search and soon came across a number of optical sensors that on paper looked promising. One of these looked especially interesting, since it was based on measuring the fluorescence of a readily available coumarin laser dye. The authors claimed that their “turn-off” sensor was cheap, provided excellent (nM) detection limits, could sense Pd2+ in aqueous environments and could detect Pd2+ in live cells. The study had been published in a well-respected journal specializing in photophysical and photochemical research and had garnered over 20 citations within the four years since publication. All looked fine, so we decided to adopt the sensor and use it for the problem in hand. After a few weeks of testing and experimentation, we realized that the sensor might not be as useful as we had been led to believe. Through systematic reproduction of the experimental procedures reported in the original paper and a number of additional experiments, we came to the (correct) conclusion that the coumarin derivative was in fact not a fluorescence sensor for Pd2+ but rather an extremely poor pH sensor able to operate over a restricted range of 1.5 pH units. This was clearly disappointing, but scientific research is rarely straightforward, and setbacks of this kind are not uncommon. What was far more worrisome was the fact that a number of the experimental procedures reported in the original paper were inaccurately or incompletely presented. This hindered our assessment of the sensor and meant that much effort was required to pinpoint earlier mistakes.

This personal anecdote, rather than being an opportunistic diatribe, is intended to highlight the importance of providing an accurate and complete description of the experimental methods used to generate the data presented in a scientific publication, and the consequences of publishing inaccurate or erroneous findings. Fortunately for us, we developed an alternative Pd2+ sensor and additionally reported our “re-evaluation” of the original work in the same peer-reviewed journal.
However, this made me think more deeply about how we use the literature to inform and underpin contemporary science. The most obvious problem faced by all researchers, whatever their field of expertise, is the sheer number of peer-reviewed papers published each year. To give you some idea of the problem, over 2.8 million new papers were published and indexed by the Scopus and Web of Science databases in 2022: a number 47% higher than in 2016. (2) Even the most dedicated researcher would only be able to read a minuscule fraction of all papers relevant to their interests, so how should one prioritize and select which papers should be looked at and which should not? There is obviously no correct answer to this question, but for many, the strategy of choice will involve the use of scientific abstract and citation databases, such as Web of Science, Scopus, PubMed, SciFinder and The Lens, to find publications relevant to their area of interest.

A citation index or database is simply an ordered register of cited articles along with a register of citing articles. Its utility lies in its ability to connect or associate scientific concepts and ideas. Put simply, if an author cites a previously published piece of work in their own paper, they have created an unambiguous link between their science and the prior work. Science citation indexing in its modern form was introduced by Eugene Garfield in the 1950s, with the primary goal of simplifying information retrieval, rather than identifying “important” or “impactful” publications. (3) Interestingly, a stated driver of his original science citation index was also to “eliminate the uncritical citation of fraudulent, incomplete, or obsolete data by making it possible for the conscientious scholar to be aware of criticisms of earlier papers”. Indeed, Garfield opines that “even if there were no other use for the citation index than that of minimizing the citation of poor data, the index would be well worth the effort”. This particular comment takes me back to my “palladium problem”. Perhaps, if I had looked more closely at the articles that cited the original paper, I would have uncovered concerns regarding the method and its sensing utility? So, having a spare hour, I did exactly this. Of course, this is one paper from many millions, but the results were instructive to me at least. In broad terms, almost all citations (to the original paper) appeared in the introductory section and simply stated that a Pd2+ sensor based on a coumarin dye had been reported. Eighty percent made no comment on the quality (in terms of performance metrics) or utility of the work, 15% were self-citations by the authors, and only one paper commented on an aspect of the original data. Based on this analysis, I do not think we should be too hard on ourselves for believing that the Pd2+ sensor would be fit for purpose. Nonetheless, how could we have leveraged the tools and features of modern electronic publishing to make a better analysis? One possible strategy could be to discriminate between citations based on their origin. For example, references in review articles may often have been cited without any meaningful analysis of the veracity of the work, while references cited in the results section of a research article are more likely to have been scrutinized by the authors in relation to their own work, whether the citation highlights a “good” or “bad” issue.
Providing the reader with such information would clearly impart extra contrast to the citation metric and help them identify articles “important” to their work. Fortunately, the advent of AI is beginning to make valuable contributions in this regard, and a number of “smart citation” tools are being introduced. For example, citation analysis platforms such as Scite (4) leverage AI to better understand and utilize scientific citations. Rather than simply reporting that a citation has occurred, such platforms classify citations by their contextual usage, for example, through the number of supporting, contrasting, and mentioning citation statements. This allows researchers to evaluate the utility and importance of a reference and ultimately enhance the scientific method. This would be especially useful in our field of sensor science, where knowledge of the sensors or sensing methods that have been successfully used in given scenarios would be invaluable when identifying the need to improve or develop new sensors. It will be some time before “smart citation metrics” are widely adopted by the scientific community. However, it is clear that not all citations are equal, and that we should be smarter in both the way we cite literature and the way we use literature citations.
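To make the idea concrete, below is a minimal, purely illustrative sketch of the kind of contextual tally such a tool might surface for a single paper. The labels and counts are hypothetical, loosely echoing the breakdown from my one-hour exercise above, and do not represent Scite's actual data model or API.

```python
# Illustrative only: a toy tally of citing statements by contextual class.
# The labels and counts are invented; they do not reflect Scite's real data or API.
from collections import Counter

# One entry per citing statement, labeled by how the citation is used.
citation_contexts = (
    ["mentioning"] * 16    # cited in passing, typically in an introduction
    + ["supporting"] * 3   # the citing work corroborates the original finding
    + ["contrasting"] * 1  # the citing work questions or re-evaluates it
)

counts = Counter(citation_contexts)
total = sum(counts.values())

for context in ("supporting", "contrasting", "mentioning"):
    n = counts.get(context, 0)
    print(f"{context:>11}: {n:2d} citing statements ({n / total:.0%})")
```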