Moving Open Repositories out of the Blind Spot of Initiatives to Correct the Scholarly Record
Frédérique Bordignon. Learned Publishing 38(2), 2025-01-28. https://doi.org/10.1002/leap.1655

Open repositories were created to enhance the access to and visibility of scholarly publications, driven by open science ideals emphasising transparency and accessibility. However, they lack mechanisms to update the status of corrected or retracted publications, which poses a threat to the integrity of the scholarly record. To explore the scope of the problem, a manually verified corpus was examined: we extracted all entries in the Crossref × Retraction Watch database for which the publication date of the corrected or retracted document fell between 2013 and 2023. This yielded 24,430 entries with a DOI, which we used to query Unpaywall and determine whether they are indexed in HAL, the second-largest institutional open repository worldwide. In most cases (91%), HAL does not mention the corrections. Although the study would benefit from a broader scope, it highlights the need to improve the role of open repositories in correction processes through better curation practices. We discuss how harvesting operations and the interoperability of platforms can maintain the integrity of the entire scholarly record. Open repositories will then not only avoid damaging the record's reliability through ambiguous reporting but will actively strengthen it.
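The lookup pipeline described in the abstract (filter Retraction Watch entries by publication year and DOI, query Unpaywall per DOI, check for a HAL copy) can be sketched as below. This is a minimal illustration, not the author's actual code: the entry field names (`doi`, `year`) and the contact email are assumptions, while the Unpaywall v2 endpoint shape and its `oa_locations` field are part of Unpaywall's public API.

```python
UNPAYWALL = "https://api.unpaywall.org/v2/{doi}?email={email}"

def unpaywall_url(doi: str, email: str = "you@example.org") -> str:
    # Unpaywall's v2 REST API takes the DOI in the URL path and requires
    # an identifying email query parameter.
    return UNPAYWALL.format(doi=doi, email=email)

def in_scope(entry: dict, start: int = 2013, end: int = 2023) -> bool:
    # Mirror the corpus filter above: keep only entries that carry a DOI
    # and whose original publication year falls in the 2013-2023 window.
    return bool(entry.get("doi")) and start <= entry.get("year", 0) <= end

def hal_locations(record: dict) -> list:
    # An Unpaywall record lists open-access copies under 'oa_locations';
    # copies hosted on HAL are recognisable from their host name.
    hal_hosts = ("hal.science", "hal.archives-ouvertes")
    return [loc["url"] for loc in record.get("oa_locations", [])
            if any(h in (loc.get("url") or "") for h in hal_hosts)]
```

Fetching each `unpaywall_url(...)` with any HTTP client and passing the parsed JSON to `hal_locations` identifies the subset of retracted or corrected papers that have a copy in HAL, which can then be checked for a correction notice.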
Development of Sci-Tech Journals in China Stimulated by the China Sci-Tech Journal Excellence Action Plan
Chuwei Li, Yaping Li, Zuoqi Ding. Learned Publishing 38(1), 2025-01-10. https://doi.org/10.1002/leap.1654

This study evaluates the influence of the China Sci-Tech Journal Excellence Action Plan (CJEAP) on the development of Chinese science and technology (sci-tech) journals. The performance of these journals' bibliometric indicators before and after the implementation of the plan is examined. In particular, a discipline richness algorithm is employed to evaluate whether and how the funding plan affected the disciplinary coverage of publications. The results show that the influence of CJEAP funding is not evenly distributed across sci-tech journals published in China. Highly funded journals appear to have expanded both their range of research areas and disciplines and their article volume, while less funded journals mainly focus on attracting manuscripts with higher scientific impact and display a less expanded disciplinary range. New journals funded by CJEAP are characterised by high scientific standards and a focus on highly specialised fields, but initially feature a relatively small article volume. Notably, a positive relationship exists between the international collaboration rate and the citation ranking score, so expanding article volume with manuscripts mainly from Chinese scholars may not be conducive to enhancing the international influence of journals published in China. In summary, our results indicate that CJEAP funding has exerted a powerful influence in promoting sci-tech journals published in China, suggesting that continued funding support is both necessary and effective for their further development.
Recycling Research Without (Self-)Plagiarism: The Importance of Context and the Case of Conference Contributions
Gert Helgesson, Jonas Åkerman, Sara Belfrage. Learned Publishing 38(1), 2024-12-31. https://doi.org/10.1002/leap.1653

In this paper, we clarify the notions of plagiarism and self-plagiarism and show that a rather straightforward observation about these notions has important implications for the admissibility of recycling research outputs. The key point is that contextual variation must be taken into account in normative assessments of recycling research outputs, which we illustrate with some examples. In particular, we apply the analysis to dissolve a disagreement about the proper handling of submissions to conferences. Some researchers are comfortable with sending the same contribution to several conferences, while others find that unacceptable and a clear deviation from good research practice. We take a closer look at the arguments about whether it is acceptable to make the same conference contribution more than once, including the argument that submitting the same contribution more than once would amount to self-plagiarism. We argue that contextual variation must be taken into account, in accordance with our previous analysis, and conclude that whether a duplicated conference contribution deviates from good research practice depends on what significance is ascribed to it in the specific case. We conclude with some practical recommendations, emphasising, for example, the importance of being explicit and clear on this point, and encourage conference organisers to provide opportunities to specify the relevant facts in the submission.
Rejected papers in academic publishing: Turning negatives into positives to maximize paper acceptance
Jaime A. Teixeira da Silva, Maryna Nazarovets. Learned Publishing 38(1), 2024-12-27. https://doi.org/10.1002/leap.1649

There are ample reasons why papers might be rejected by peer-reviewed journals, and the experience can be sobering, especially for inexperienced authors. When a paper is rejected several times, this may signal problems with the paper (e.g., weak methodology or a lack of robust analyses), or that it is insufficiently developed, poorly written, or too topic-specific and better suited to an appropriate niche journal. Whether a paper is rejected once or several times, any feedback from the journal, including the reasons for rejection, provides a useful signal for improving the paper before it is resubmitted elsewhere. This article examines the literature on the rejection of papers in academic journals, encompassing the opinions and experiences offered by authors as well as the advice of editors, allowing readers and authors who experience rejection to reflect on the possible reasons for that outcome. Many papers on this topic were published as editorials or opinion pieces, offering advice on how to improve aspects of a submitted paper in order to increase its chances of acceptance.
Authors, wordsmiths and ghostwriters: Early career researchers' responses to artificial intelligence
David Clark, David Nicholas, Marzena Swigon, Abdullah Abrizah, Blanca Rodríguez-Bravo, Jorge Revez, Eti Herman, Jie Xu, Anthony Watkinson. Learned Publishing 38(1), 2024-12-24. https://doi.org/10.1002/leap.1652

This paper presents the results of a study of the impact of artificial intelligence (AI) on early career researchers (ECRs), an important group to study because their millennial mindset may render them especially open to AI. We provide empirical data and a validity check on the numerous publications offering forecasts and prognostications. This interview-based study, part of the Harbingers project on ECRs, covers a convenience sample of 91 ECRs from all fields and seven countries, using both qualitative and quantitative data to examine ECRs' AI experience, engagement, utility, attitudes and representativeness. We find that: (1) ECRs exhibit mostly limited or moderate levels of experience; (2) regarding engagement and usage, there is a divide, with some ECRs exhibiting little or none and others enthusiastically using AI; (3) ECRs do not think they are unrepresentative compared with their colleagues; (4) ECRs who score highly on these measures tend to be computer scientists, but not exclusively so; (5) the main concerns regarding AI centre on authenticity, especially plagiarism; (6) a major attraction of AI is the automation of 'wordsmithing': the process and technique of composition and writing.
'As of my last knowledge update': How is content generated by ChatGPT infiltrating scientific papers published in premier journals?
Artur Strzelecki. Learned Publishing 38(1), 2024-12-24. https://doi.org/10.1002/leap.1650

The aim of this paper is to highlight how content generated by the large language model ChatGPT is appearing in peer-reviewed papers in journals from recognized publishers. The paper demonstrates how to identify passages indicating that a text fragment was generated, that is, entirely created, by ChatGPT. To prepare an illustrative compilation of papers appearing in journals indexed in the Web of Science and Scopus databases and possessing Impact Factor and CiteScore indicators, the SPAR4SLR method, mainly applied in systematic literature reviews, was used. Three main findings are presented: (1) in highly regarded premier journals, articles appear that bear the hallmarks of content generated by AI large language models whose use was not declared by the authors; (2) many of these identified papers are already receiving citations from other scientific works, also published in journals found in scientific databases; and (3) most of the identified papers belong to medicine and computer science, but there are also articles from disciplines such as environmental science, engineering, sociology, education, economics and management. This paper aims to continue and add to the recently initiated discussion on the use of large language models like ChatGPT in the creation of scholarly works.
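The kind of screening described above, searching full texts for boilerplate fragments that ChatGPT tends to emit, can be sketched as follows. Only the first phrase comes from the paper's title; the others are commonly cited markers added here for illustration, and a real screen would rely on a larger curated list and manual verification of each hit.

```python
# Illustrative marker phrases; a production list would be curated and longer.
TELLTALE_PHRASES = (
    "as of my last knowledge update",
    "as an ai language model",
    "regenerate response",
)

def flag_generated_fragments(text: str) -> list:
    # Return every marker phrase found in the text, case-insensitively.
    # A hit is only a lead for manual review, not proof of AI generation.
    lowered = text.lower()
    return [phrase for phrase in TELLTALE_PHRASES if phrase in lowered]
```

Running this over a corpus of candidate papers yields a shortlist of suspect passages; as the paper stresses, such fragments only indicate undeclared use when they survive into the published text verbatim.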