Smol, J. 2016. Some advice to early career scientists: Personal perspectives on surviving in a complex world. Ideas in Ecology and Evolution 9. doi:10.4033/IEE.2016.9.5.E

I began writing this paper with some trepidation, as I can imagine many readers asking: what makes you an expert on providing advice to young scientists? I do not claim any special expertise in this area beyond the practical experience I have gained from mentoring a large number of young scientists over the last three decades. My ideas on this topic have been refined over the past few years when, after being awarded several teaching and mentoring awards, I began receiving invitations to give talks and workshops on mentoring young scientists. To date, I have presented on this topic on five continents, indicating broad interest in these issues. The impetus for this commentary was reinforced further when the Editor of Ideas in Ecology and Evolution recently listened to one of my presentations and invited me to provide this perspective. The 13 points of advice outlined below summarize some of the main topics I have attempted to develop in my various workshops and presentations to young scientists. These points have evolved over time and were modified following discussions with students and mentors. I certainly do not claim that any of them are highly original, but they represent what I believe to be practical suggestions and points for discussion.

Jouventin, P., Christen, Y., and Dobson, F. 2016. Altruism in wolves explains the coevolution of dogs and humans. Ideas in Ecology and Evolution 9. doi:10.4033/iee.2016.9.2.n

The date of dog domestication has been pushed back to between 15,000 and 30,000 years ago (estimates vary), a time when hunter-gatherer societies predominated in northern Europe and central Asia. We present insights from evolutionary behavioural ecology suggesting that wolves may have been “tricked” by their social evolution into contributing to the success of prehistoric human families or tribes. Four different wolves (one observed in great detail, as reported in a recent book) that were raised by human families exhibited cooperative behaviours that protected their human “pack members.” Such hereditary altruistic behaviours may have been transferred by descent to the first dogs, which helped our ancestors hunt large animals and fight against other human tribes and wild carnivores. We hypothesize that the first requirement of domestication was less aggressive wolf behaviour, within a wolf–human coevolution of the cooperative family or tribe that used wolves to increase its competitive fitness advantages.

Fiset, S. 2016. Altruism in wolves explains the coevolution of dogs and wolves: A response to Jouventin, Christen, and Dobson. Ideas in Ecology and Evolution 9. doi:10.4033/IEE.2016.9.3.C

Jouventin et al. suggest that altruistic behaviour in wolves, demonstrated by modern wolves towards their human caretakers, was exploited by prehistoric humans and explains the possible coevolution of dogs and humans. In this response paper, I question their observations and propose alternative explanations for them. I also suggest several hypotheses that the authors need to explore regarding the evolution of altruistic behaviour in wolves towards humans. Finally, I question how prehistoric humans could have raised wolf pups and why archaeological evidence does not support this hypothesis.

Wardle, D. 2016. Why Altmetric scores should never be used to measure the merit of scientific publications (or “how to tweet your way to honour and glory”). Ideas in Ecology and Evolution 9. doi:10.4033/IEE.2016.9.1.E

Because journal impact factors are widely recognized as a seriously flawed means of assessing the merit of a scientific paper (Seglen 1997), and because it takes time before it is known how well cited a paper will become, there is a demand for metrics that can quantify a paper’s impact rapidly after publication. One prominent recent development is ‘altmetrics’, which capitalize on dissemination of the work via social media. The company ‘Altmetric’ provides an article-level score, presented within a multicoloured badge, that quantifies the extent to which the work has been picked up by various social and other media outlets, including Twitter, Facebook, and blogs. This score is placed prominently alongside the abstract of every paper published in the majority of the main ecological journals. Although the Altmetric company’s website cautions that one should not read too much into these scores without digging ‘deeper into the numbers and looking at the qualitative data underneath’, it also emphasizes that ‘Altmetrics are becoming widely used in academia, by individuals (as evidence of influence for promotion and tenure and in applying for grants), institutions (for benchmarking a university’s overall performance)’, and that the Altmetric badges (showcasing the scores) ‘provide a quick and easy way of showcasing the value of your publishing program to internal and external stakeholders, such as funding institutions and editorial boards’. Indeed, increasing numbers of researchers are making use of the Altmetric scores of their work in their CVs and applications for jobs and tenure, at least when those scores reflect favourably on the author.

If Altmetric scores are to be used as a reliable indicator of the merit of a scientific publication, then it is critical that they cannot be gamed, and that they are entirely independent of the actions of the author post-publication. To test whether this is the case, I conducted a simple analysis of the first 100 papers published in the journal Ecology in 2015. For each paper I noted the Altmetric score presented alongside the paper’s abstract. Because Altmetric scores for most papers are determined in large part by how many Twitter users ‘tweeted’ about the paper, I then examined the tweets for that paper and recorded whether or not the paper had been tweeted about by its own authors, i.e., from a Twitter account that an author has primary control over (such as a personal or lab-group Twitter account). This analysis reveals that publications tweeted about by their own authors had Altmetric scores 3.3 times greater than the others when mean values were considered, or 4.0 times greater when median values were used (Table 1). There are two possible explanations for this outcome. The first is that through tweeting about their own work, the authors generated publicity for it that greatly elevated its Altmetric score. While it is noted on the Altmetric website that they ‘count o…

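The comparison behind Table 1 is a simple two-group summary. The sketch below is not the author’s actual script: it only illustrates the kind of calculation involved, assuming a hypothetical CSV file with one row per paper and invented column names `altmetric_score` and `author_tweeted`.

```python
# Minimal sketch of the group comparison described above.
# File name and column names are hypothetical; the underlying data set
# (first 100 papers in Ecology, 2015) is not distributed with the paper.
import csv
from statistics import mean, median

def score_ratios(path="ecology_2015_altmetric.csv"):
    tweeted, not_tweeted = [], []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            score = float(row["altmetric_score"])
            # author_tweeted is assumed to be "yes"/"no" in this hypothetical file
            (tweeted if row["author_tweeted"] == "yes" else not_tweeted).append(score)
    return {
        "mean_ratio": mean(tweeted) / mean(not_tweeted),
        "median_ratio": median(tweeted) / median(not_tweeted),
    }

if __name__ == "__main__":
    # In the paper's data these ratios were 3.3 (means) and 4.0 (medians).
    print(score_ratios())
```
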
Aarssen, L. 2016. Three common sources of error in peer review and how to minimize them. Ideas in Ecology and Evolution 9. doi:10.4033/IEE.2016.9.7.E

Researchers have an odd love-hate relationship with peer review. Most regard it as agonizing, but at the same time necessary. Peer review is of course a good thing when it provides the value that is expected of it: weeding out junk papers, and improving the rest. Unfortunately, however, the former often doesn't work particularly well, and when the latter works, it usually happens only after a lot of wasted time, hoop-jumping, and wading through absurdity. Perhaps we put up with this simply because the toil and pain of it all has been sustained for so long that it has come to define the culture of academia—one that believes that no contribution can be taken seriously unless it has suffered and endured the pain, and thus earned the coveted badge of 'peer-reviewed publication'. Here, I argue that the painful route to the endorsement payoff of peer review, and its common failure to provide the value expected of it, are routinely exacerbated by three sources of error in the peer-review process, all of which can be minimized with some changes in practice.

Some interesting data for context are provided by a recent analysis of peer-review results from the journal Functional Ecology. Like many journals now, Functional Ecology invites submitting authors to include a list of suggested reviewers for their manuscripts, and editors commonly invite some of their reviewers from this list. Fox et al. (2016) found that author-preferred reviewers rated papers much more positively than did editor-selected reviewers, and papers reviewed by author-preferred reviewers were much more likely to be invited for revision than were papers reviewed by editor-selected reviewers. Few will be surprised by these findings, and there is good reason to be concerned that the expected value of peer review has missed the mark here. This failure is undoubtedly not unique to Functional Ecology. It is, I suspect, likely to be a systemic feature of the traditional single-blind peer-review model, where reviewers know who the authors are, but not vice versa. The critical question is: what is the signal of failure here? The fact that author-preferred reviewers rated papers more positively, or the fact that editor-selected reviewers rated papers more negatively? Either one could be a product of peer-review error, and at least three explanations could be involved:

Poesch, M. 2015. To dendrogram or not? Consensus methods show that is the question needed to move functional diversity metrics forward. Ideas in Ecology and Evolution 8. doi:10.4033/IEE.2015.8.12.N

Functional diversity indices have become important tools for measuring variation in species characteristics that are relevant to ecosystem services. A frequently used dendrogram-based measure of functional diversity, 'FD', has been shown to be sensitive to methodological choices made in its calculation, and consensus methods have been suggested as an improvement. The objective of this study was to determine whether consensus methods can reduce this sensitivity when measuring FD. To calculate FD, a distance measure and a clustering method must be chosen. Using data from three natural communities, this study demonstrates that consensus methods were unable to resolve even simple choices of distance measure (Euclidean and cosine) and clustering method (UPGMA, complete, and single linkage). Overall, consensus across the choices inherent in calculating functional diversity was low, ranging from 41% to 45%. Further, regardless of how FD was measured, or how many species were removed from the community, FD closely mirrored species richness. Future research on the impact of methodological choices, including the choices inherent in producing a dendrogram and the statistical complications they produce, is needed to move functional diversity metrics forward.

Martin, P.R. 2015. The paradox of the Birds-of-Paradise: persistent hybridization as a signature of historical reinforcement. Ideas in Ecology and Evolution 8. doi:10.4033/IEE.2015.8.10.N

The birds-of-paradise (Paradisaeidae) exhibit some of the most diverse color patterns and courtship displays among species. Paradoxically, birds-of-paradise hybridize more frequently than other birds, even hybridizing across species and genera with remarkably divergent color patterns. Hybridization among such distinctly colored species might suggest that reinforcement was unimportant for color pattern divergence because reinforcement favors trait divergence that reduces the likelihood of hybridization over time, and is expected to eliminate hybridization between species. Here I present an alternative view: that persistent but infrequent hybridization among species that differ markedly in prezygotic isolating traits, such as color pattern in birds, represents the signature of historical reinforcement, and occurs when (i) divergence in single traits can reduce, but not prevent, hybridization, (ii) trade-offs constrain the divergence of prezygotic isolating traits, and (iii) selection against hybrids is weak when hybrids are rare. Considering these factors, the paradox of the birds-of-paradise—where species with distinct prezygotic isolating traits are more likely to hybridize at low frequencies—is the expected outcome of reinforcement. Sexual selection by female choice could further intensify the effects of reinforcement, particularly if reinforcement directs sexual selection to different traits in hybridizing populations. This latter process could potentially explain the exceptional diversity of extravagant ornaments in the birds-of-paradise.

{"title":"Selection for reinforcement versus selection for signals of quality and attractiveness","authors":"G. Hill","doi":"10.4033/IEE.2015.8.11.C","DOIUrl":"https://doi.org/10.4033/IEE.2015.8.11.C","url":null,"abstract":"","PeriodicalId":42755,"journal":{"name":"Ideas in Ecology and Evolution","volume":"22 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2015-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70235197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
White, E. 2015. Some thoughts on best publishing practices for scientific software. Ideas in Ecology and Evolution 8. doi:10.4033/IEE.2015.8.9.C

It is increasingly recognized that software is central to much of science, and that rigorous approaches to software development are important for making sure that science is based on a solid foundation (Wilson et al. 2014). While there has been increasing discussion of the software development practices that lead to robust scientific software (e.g., Jackson et al. 2011, Osborne et al. 2014, Wilson et al. 2014), figuring out how to actively encourage the use of these practices can be challenging. Poisot (2015) proposes a set of best practices to be included as part of the review process for software papers. These include automated testing, public test coverage statistics, continuous integration, release of code in citeable ways using DOIs, and documentation (Poisot 2015). These are all important recommendations that will help encourage the use of good practice in the development of scientific software (Jackson et al. 2011, Osborne et al. 2014, Wilson et al. 2014). Requiring these approaches for publication of an associated software paper should help improve the robustness of published software (automated testing, continuous integration), its ease of use (documentation, continuous integration), and the potential for the scientific community to build on and contribute to existing efforts.

As part of thinking about these best practices, Poisot (2015) grapples with one of the fundamental challenges of scientific software publication: how do we review scientific software? Most scientists are not trained in how to conduct code reviews (Petre and Wilson 2014), and the time commitment required to fully review a moderately sized piece of software is substantial. In combination, this would make it very difficult to find reviewers for software papers if reviewers were expected to perform a thorough code review. Poisot joins Mills (2015) in suggesting that this task could be made more manageable by requiring all software submitted for publication to have automated testing with reasonably high coverage. While Mills (2015) suggests that this will “encourage researchers to use this fundamental technique for ensuring code quality”, Poisot takes the idea a step further by suggesting that reviewers could then focus on reviewing the tests to determine whether the software does what it is intended to do when provided with known inputs. This approach isn't perfect: tests are necessarily limited in the inputs they evaluate, and mistakes can occur in tests as well as in the code itself. However, reviewing tests to determine whether they are sufficient and whether the code produces correct outcomes in at least some cases is, I think, much more tenable than reviewing an entire codebase line by line. It is one of the most reasonable solutions I have seen to the challenge of reviewing software.

While I agree with all of the major recommendations made in Poisot (2015), I think the ideas related to making software citeable will benefit from further discussion. While the benefits of making scientific software citeable are clear, it is less clear that such citation must be accomplished through the use of DOIs. As Poisot (2015) notes, using DOIs for scientific software has its advantages. Many journals accept scholarly products with DOIs for inclusion in reference lists, which means that software is recognized in the same way as papers. This helps give credit to scientists who develop software, in a way that is readily understood by academic reward structures. It also has the potential to make bibliometric analyses of software use more straightforward. However, many major scientific software products do not use DOIs for the software itself, preferring instead a generic citation to the software (e.g., SymPy: SymPy Development Team)…

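The test-focused review that Poisot and Mills describe is easiest to picture with a concrete case. The sketch below is a hypothetical pytest-style test file, not taken from any package or paper discussed here: the function `shannon_diversity` and its expected values are invented for illustration, but they show the pattern a reviewer would inspect, namely known inputs paired with independently derivable outputs.

```python
# Hypothetical example of the kind of automated test a reviewer could check
# instead of reading an entire codebase. The function and module are
# illustrative only, not part of any published package.
import math
import pytest

def shannon_diversity(abundances):
    """Shannon diversity H' = -sum(p_i * ln(p_i)) over nonzero abundances."""
    total = sum(abundances)
    props = [a / total for a in abundances if a > 0]
    return -sum(p * math.log(p) for p in props)

def test_even_community():
    # Four equally abundant species: H' should equal ln(4).
    assert shannon_diversity([5, 5, 5, 5]) == pytest.approx(math.log(4))

def test_single_species_has_zero_diversity():
    assert shannon_diversity([42]) == pytest.approx(0.0)

def test_zero_abundances_are_ignored():
    # Two equally abundant species plus a zero: H' should equal ln(2).
    assert shannon_diversity([10, 0, 10]) == pytest.approx(math.log(2))
```

A reviewer reading such a file can judge whether the expected values are independently justified and whether the cases cover the behaviour the software paper claims, without auditing the implementation line by line.
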
Poisot, T. 2015. Best publishing practices to improve user confidence in scientific software. Ideas in Ecology and Evolution 8. doi:10.6084/M9.FIGSHARE.1434688.V1

The practice of science is becoming increasingly reliant on software: despite a lack of formal training (Hastings et al. 2014; Wilson et al. 2014), upwards of 30% of scientists need to develop their own. In ecology and evolution, this has led several journals (notably Methods in Ecology & Evolution, Ecography, and BMC Ecology) to create specific sections for papers describing software packages. This can only be viewed as a good thing, since the call to publish software in an open way has been made several times (Barnes 2010) and is broadly viewed as a way towards greater reproducibility (Ince et al. 2012). In addition, by providing a peer-reviewed, journal-approved venue, this change in editorial practices gives credit to scientists for whom software development is a frequent research output.