
Latest publications in Ideas in Ecology and Evolution

Some advice to early career scientists: Personal perspectives on surviving in a complex world
IF 0.2 | Pub Date: 2016-07-29 | DOI: 10.4033/IEE.2016.9.5.E
J. Smol
I began writing this paper with some trepidation as I can imagine many readers asking: What makes you an expert on providing advice to young scientists? I do not claim to have any special expertise in this area, other than the practical experience I have gained from mentoring a large number of young scientists over the last three decades. My ideas on this topic have been refined over the past few years when, after being awarded several teaching and mentoring awards, I began receiving invitations to provide talks and workshops focused on mentoring young scientists. To date, I have provided presentations on this topic on five continents, indicating broad interests in these issues. The impetus for this commentary was reinforced further when the Editor of Ideas in Ecology and Evolution recently listened to one of my presentations and invited me to provide this perspective. The 13 points of advice outlined below summarize some of the main topics I have attempted to develop in my various workshops and presentations to young scientists. These points have evolved over time, and were modified following discussions with students and mentors. I certainly do not claim that any of them are highly original, but they represent what I believe to be practical suggestions and points for discussion.
Citations: 2
Altruism in wolves explains the coevolution of dogs and humans
IF 0.2 | Pub Date: 2016-06-13 | DOI: 10.4033/iee.2016.9.2.n
P. Jouventin, Y. Christen, F. Dobson
The date of historical domestication of dogs has been pushed back to between 15,000 and 30,000 years ago (estimates vary), a time when hunter-gatherer societies predominated in northern Europe and central Asia. We present insights from evolutionary behavioural ecology suggesting that wolves may have been “tricked” by their social evolution into contributing to the success of prehistoric human families or tribes. Four different wolves (one observed in great detail, as reported in a recent book) that were raised by human families exhibited cooperative behaviours that protected their human “pack members.” Such hereditary altruistic behaviours may have been transferred by descent to the first dogs, which helped our ancestors hunt large animals and fight against other human tribes and wild carnivores. We hypothesize that the first need in domestication was for less aggressive wolf behaviour, within the wolf and human coevolution of the cooperative family or tribe that used wolves to increase their competitive fitness advantages.
Citations: 5
Altruism in wolves explains the coevolution of dogs and wolves: A response to Jouventin, Christen, and Dobson
IF 0.2 | Pub Date: 2016-06-13 | DOI: 10.4033/IEE.2016.9.3.C
S. Fiset
Jouventin et al. suggest that altruistic behaviour in wolves, demonstrated by modern wolves towards their human caretakers, was exploited by prehistoric humans and explains the possible coevolution of dogs and humans. In this response paper, I question their observations and propose alternative explanations for them. I also suggest various hypotheses that the authors need to explore regarding the evolution of altruistic behaviour in wolves towards humans. Finally, I also question how prehistoric humans could have raised wolf pups and why archaeological evidence does not support this hypothesis.
Citations: 2
Why Altmetric scores should never be used to measure the merit of scientific publications (or 'how to tweet your way to honour and glory')
IF 0.2 | Pub Date: 2016-05-09 | DOI: 10.4033/IEE.2016.9.1.E
D. Wardle
Because journal impact factors are widely recognized as a seriously flawed means of assessing the merit of a scientific paper (Seglen 1997), and because it takes time before it is known how well cited a scientific paper will become, there is a demand for metrics that can quantify a paper’s impact rapidly after publication. One prominent recent development is that of ‘altmetrics’, which capitalize on dissemination of the work via social media. The company ‘Altmetric’ provides an article-level score, presented within a multicoloured badge that quantifies the extent to which the work has been picked up by various social and other media outlets, including Twitter, Facebook and blogs. This score is placed prominently alongside the abstract of every paper published in the majority of the main ecological journals. Although the Altmetric company’s website cautions that one should not read too much into these scores without digging ‘deeper into the numbers and looking at the qualitative data underneath’, it also emphasizes that ‘Altmetrics are becoming widely used in academia, by individuals (as evidence of influence for promotion and tenure and in applying for grants), institutions (for benchmarking a university’s overall performance)’, and that the Altmetric badges (showcasing the scores) ‘provide a quick and easy way of showcasing the value of your publishing program to internal and external stakeholders, such as funding institutions and editorial boards’. Indeed, increasing numbers of researchers are making use of the Altmetric scores of their work in their CVs and applications for jobs and tenure, at least when they reflect favourably on the author. If Altmetric scores are to be used as a reliable indicator of the merit of a scientific publication, then it is critical that they cannot be gamed, and that they are entirely independent of the actions of the author post-publication. To test if this is the case, I conducted a simple analysis on the first 100 papers published in the journal Ecology in 2015. For each paper I noted the Altmetric score presented alongside the paper’s abstract. Because Altmetric scores for most papers are determined in large part by how many Twitter users ‘tweeted’ about the paper, I then examined the tweets for that paper and recorded whether or not the paper had been tweeted about by its own authors, i.e., from a Twitter account that the author has primary control over (such as their personal Twitter account, or lab-group Twitter account). This analysis reveals that publications which were tweeted about by their own authors had Altmetric scores 3.3 times greater than those of the others when mean values were considered, or 4.0 times greater when median values were used (Table 1). There are two possible explanations for this outcome. The first is that through tweeting about their own work, the authors generated publicity for it that greatly elevated its Altmetric score. While the Altmetric website notes that each person is counted only once as a source, every ‘follower’ who retweets an author’s tweet (and who is likely to be favourably disposed toward the author), together with those followers’ followers, is presumably treated as an independent source. This suggests that authors who tweet about their own work can substantially raise their Altmetric scores through retweets alone, particularly if they have many loyal followers. The second possible explanation is that authors who hold Twitter accounts and tweet about their own research are, on average, also better researchers whose work more genuinely merits a high Altmetric score. For that to be the case, the scientists whose work has the greatest impact would need to ...
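To make the shape of this comparison concrete, the following Python sketch shows how the two groups of papers could be compared. It is not the author's analysis code: the file name ecology_2015_papers.csv and the columns altmetric_score and tweeted_by_author are hypothetical stand-ins for the data described in the abstract.

```python
# Minimal sketch of the group comparison described in the abstract.
# Hypothetical input: one row per paper, with a numeric "altmetric_score"
# and a 0/1 flag "tweeted_by_author" recording whether any of the paper's
# own authors tweeted about it. Illustration only, not the original analysis.
import pandas as pd

papers = pd.read_csv("ecology_2015_papers.csv")

self_tweeted = papers.loc[papers["tweeted_by_author"] == 1, "altmetric_score"]
other = papers.loc[papers["tweeted_by_author"] == 0, "altmetric_score"]

# The abstract reports ratios of roughly 3.3 (means) and 4.0 (medians).
print(f"mean ratio:   {self_tweeted.mean() / other.mean():.1f}")
print(f"median ratio: {self_tweeted.median() / other.median():.1f}")
```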
Citations: 2
Three common sources of error in peer review and how to minimize them
IF 0.2 | Pub Date: 2016-01-01 | DOI: 10.4033/IEE.2016.9.7.E
L. Aarssen
Researchers have an odd love-hate relationship with peer review. Most regard it as agonizing, but at the same time, necessary. Peer review is of course a good thing when it provides the value that is expected of it: weeding out junk papers, and improving the rest. Unfortunately, however, the former often doesn't work particularly well, and when the latter works, it usually happens only after a lot of wasted time, hoop-jumping and wading through absurdity. Perhaps we put up with this simply because the toil and pain of it all has been sustained for so long that it has come to define the culture of academia: one that believes that no contribution can be taken seriously unless it has suffered and endured the pain, and thus earned the coveted badge of 'peer-reviewed publication'. Here, I argue that the painful route to endorsement payoff from peer review, and its common failure to provide the value expected of it, are routinely exacerbated by three sources of error in the peer-review process, all of which can be minimized with some changes in practice. Some interesting data for context are provided by a recent analysis of peer-review results from the journal Functional Ecology. Like many journals now, Functional Ecology invites submitting authors to include a list of suggested reviewers for their manuscripts, and editors commonly invite some of their reviewers from this list. Fox et al. (2016) found that author-preferred reviewers rated papers much more positively than did editor-selected reviewers, and papers reviewed by author-preferred reviewers were much more likely to be invited for revision than were papers reviewed by editor-selected reviewers. Few will be surprised by these findings, and there is good reason to be concerned, of course, that the expected value from peer review here has missed the mark. This failure is undoubtedly not unique to Functional Ecology. It is, I suspect, likely to be a systemic feature of the traditional single-blind peer-review model, in which reviewers know who the authors are, but not vice versa. The critical question is: what is the signal of failure here? Is it that author-preferred reviewers rated papers more positively, or that editor-selected reviewers rated papers more negatively? Either one could be a product of peer-review error, and at least three explanations could be involved:
Citations: 0
To dendrogram or not? Consensus methods show that is the question needed to move functional diversity metrics forward
IF 0.2 | Pub Date: 2015-08-17 | DOI: 10.4033/IEE.2015.8.12.N
M. Poesch
Functional diversity indices have become important tools for measuring variation in species characteristics that are relevant for ecosystem services. A frequently used dendrogram-based method for measuring functional diversity, ‘FD’, was shown to be sensitive to methodological choices in its calculation, and consensus methods have been suggested as an improvement. The objective of this study was to determine whether consensus methods can be used to reduce sensitivity when measuring FD. To calculate FD, a distance measure and a clustering method must be chosen. Using data from three natural communities, this study demonstrates that consensus methods were unable to resolve even simple choices of distance measure (Euclidean and cosine) and clustering method (UPGMA, complete and single linkage). Overall, there was low consensus, ranging from 41–45%, across choices inherent in functional diversity. Further, regardless of how FD was measured, or how many species were removed from the community, FD closely mirrored species richness. Future research on the impact of methodological choices, including choices inherent in producing a dendrogram and the statistical complications they produce, is needed to move functional diversity metrics forward.
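To illustrate the methodological choices at issue, the following Python sketch builds a functional dendrogram under each combination of distance measure (Euclidean, cosine) and clustering method (UPGMA, complete, and single linkage) named in the abstract and reports a dendrogram-based FD value for each. The trait matrix is randomly generated, and total branch length is used here as one common dendrogram-based definition of FD; this is a sketch under those assumptions, not the study's analysis code.

```python
# Illustration of how dendrogram-based FD depends on the distance measure
# and clustering method chosen. The trait matrix is invented; this is not
# the analysis code used in the study.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, to_tree

rng = np.random.default_rng(0)
traits = rng.normal(size=(10, 4))   # 10 species x 4 functional traits (hypothetical)

def total_branch_length(z):
    """Sum of branch lengths of the dendrogram encoded in a linkage matrix."""
    tree = to_tree(z)
    total, stack = 0.0, [tree]
    while stack:
        node = stack.pop()
        for child in (node.left, node.right):
            if child is not None:
                total += node.dist - child.dist   # branch from parent down to child
                stack.append(child)
    return total

# 'average' linkage is UPGMA; 'complete' and 'single' match the other choices above.
for metric in ("euclidean", "cosine"):
    for method in ("average", "complete", "single"):
        fd = total_branch_length(linkage(pdist(traits, metric=metric), method=method))
        print(f"{metric:>9} distance, {method:<8} linkage: FD = {fd:.2f}")
```

The spread among the six FD values gives a quick sense of how strongly the metric depends on the very choices that the consensus methods discussed above were meant to resolve.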
Citations: 0
The paradox of the Birds-of-Paradise: persistent hybridization as a signature of historical reinforcement
IF 0.2 | Pub Date: 2015-08-04 | DOI: 10.4033/IEE.2015.8.10.N
Paul R. Martin
The birds-of-paradise (Paradisaeidae) exhibit some of the most diverse color patterns and courtship displays among species. Paradoxically, birds-of-paradise hybridize more frequently than other birds, even hybridizing across species and genera with remarkably divergent color patterns. Hybridization among such distinctly colored species might suggest that reinforcement was unimportant for color pattern divergence because reinforcement favors trait divergence that reduces the likelihood of hybridization over time, and is expected to eliminate hybridization between species. Here I present an alternative view: that persistent but infrequent hybridization among species that differ markedly in prezygotic isolating traits, such as color pattern in birds, represents the signature of historical reinforcement, and occurs when (i) divergence in single traits can reduce, but not prevent, hybridization, (ii) trade-offs constrain the divergence of prezygotic isolating traits, and (iii) selection against hybrids is weak when hybrids are rare. Considering these factors, the paradox of the birds-of-paradise—where species with distinct prezygotic isolating traits are more likely to hybridize at low frequencies—is the expected outcome of reinforcement. Sexual selection by female choice could further intensify the effects of reinforcement, particularly if reinforcement directs sexual selection to different traits in hybridizing populations. This latter process could potentially explain the exceptional diversity of extravagant ornaments in the birds-of-paradise.
Citations: 6
Selection for reinforcement versus selection for signals of quality and attractiveness
IF 0.2 | Pub Date: 2015-08-04 | DOI: 10.4033/IEE.2015.8.11.C
G. Hill
Citations: 6
Some thoughts on best publishing practices for scientific software
IF 0.2 | Pub Date: 2015-07-11 | DOI: 10.4033/IEE.2015.8.9.C
E. White
It is increasingly recognized that software is central to much of science, and that rigorous approaches to software development are important for making sure that science is based on a solid foundation (Wilson et al. 2014). While there has been increasing discussion of the software development practices that lead to robust scientific software (e.g., Jackson et al. 2011, Osborne et al. 2014, Wilson et al. 2014), figuring out how to actively encourage the use of these practices can be challenging. Poisot (2015) proposes a set of best practices to be included as part of the review process for software papers. These include automated testing, public test coverage statistics, continuous integration, release of code in citeable ways using DOIs, and documentation (Poisot 2015). These are all important recommendations that will help encourage the use of good practice in the development of scientific software (Jackson et al. 2011, Osborne et al. 2014, Wilson et al. 2014). Requiring these approaches for publication of an associated software paper should help improve the robustness of published software (automated testing, continuous integration), its ease of use (documentation, continuous integration), and the potential for the scientific community to build on and contribute to existing efforts. As part of thinking about these best practices, Poisot (2015) grapples with one of the fundamental challenges of scientific software publication: how do we review scientific software? Most scientists are not trained in how to conduct code reviews (Petre and Wilson 2014) and the time commitment to do a full review of a moderately sized piece of software is substantial. In combination, this would make it very difficult to find reviewers for software papers if reviewers were expected to perform a thorough code review. Poisot joins Mills (2015) in suggesting that this task could be made more manageable by requiring all software submitted for publication to have automated testing with reasonably high coverage. While Mills (2015) suggests that this will “encourage researchers to use this fundamental technique for ensuring code quality”, Poisot takes the idea a step further by suggesting that reviewers could then focus on reviewing the tests to determine if the software does what it is intended to do when provided with known inputs. This approach isn’t perfect. Tests are necessarily limited in the inputs that are evaluated and mistakes can occur in tests as well as in the code itself. However, reviewing tests to determine whether they are sufficient and whether the code produces correct outcomes in at least some cases is, I think, much more tenable than reviewing an entire codebase line by line. It is one of the most reasonable solutions I have seen to the challenge of reviewing software. While I agree with all of the major recommendations made in Poisot (2015), I think the ideas related to making software citeable will benefit from further discussion. While the benefits of making scientific software citeable are clear, it is less clear that such citation needs to be accomplished through the use of DOIs. As noted in Poisot (2015), using DOIs for scientific software has its advantages. Many journals accept scholarly products with DOIs for inclusion in reference lists, which means that software receives credit in the same way that papers do. This helps give credit to the scientists who develop software in a way that is readily understood by academic reward structures. It may also make bibliometric analyses of software use more straightforward. However, many major scientific software products do not use DOIs for the software itself, preferring instead a generic citation for the software (e.g., SymPy: SymPy Development Team ...)
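To make the testing recommendation concrete, here is a minimal sketch of the kind of automated test the commentary has in mind, written with pytest. The shannon_diversity function and its expected values are invented for this example; the point is that a reviewer can check known inputs against known outputs rather than reading an entire codebase line by line.

```python
# Minimal sketch of automated testing for scientific code, using pytest.
# The function under test and its expected values are invented for this example.
import math

def shannon_diversity(abundances):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over species with abundance > 0."""
    total = sum(abundances)
    proportions = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in proportions)

def test_single_species_has_zero_diversity():
    assert shannon_diversity([42]) == 0.0

def test_two_equally_abundant_species_give_ln2():
    assert math.isclose(shannon_diversity([10, 10]), math.log(2))

def test_zero_abundances_are_ignored():
    assert math.isclose(shannon_diversity([5, 0, 5]), shannon_diversity([5, 5]))
```

Running pytest executes these checks, and a coverage tool such as coverage.py (or the pytest-cov plugin) can then report which lines the tests exercise, which is what the public test-coverage statistics mentioned above summarize.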
Citations: 11
Best publishing practices to improve user confidence in scientific software
IF 0.2 | Pub Date: 2015-07-11 | DOI: 10.6084/M9.FIGSHARE.1434688.V1
T. Poisot
The practice of science is becoming increasingly reliant on software; despite the lack of formal training (Hastings et al. 2014; Wilson et al. 2014), upwards of 30% of scientists need to develop their own. In ecology and evolution, this has resulted in several journals (notably Methods in Ecology & Evolution, Ecography, BMC Ecology) creating specific sections for papers describing software packages. This can only be viewed as a good thing, since the call to publish software in an open way has been made several times (Barnes 2010), and is broadly viewed as a way towards greater reproducibility (Ince et al. 2012). In addition, by providing a peer-reviewed, journal-approved venue, this change in editorial practices gives credit to scientists for whom software development is a frequent research output.
Citations: 22