"Local editors have no time to lose for building their journals’ reputations" — Byung-Mo Oh. Science Editing (published February 20, 2022). DOI: https://doi.org/10.6087/kcse.268

Meeting: The 10th Anniversary Conference of the Korean Council of Science Editors, Session A
Date: September 8, 2021
Venue: Zoom
Organizer: Korean Council of Science Editors

Contents of Session A:
- Manuscript editors’ role for the next decade: Duc Le (Senior Executive Editor, The Lancet)
- How can local publishers survive in 10 years: Younsang Cho (CEO, M2PI)
- Preparation of Korean journal editors for the next 10 years: Cheol-Heui Yun (Professor, Department of Agricultural Biotechnology, Seoul National University)
"Comparing the accuracy and effectiveness of Wordvice AI Proofreader to two automated editing tools and human editors" — Kevin Heintz, Young-Wan Roh, Jonghwan Lee. Science Editing (published February 20, 2022). DOI: https://doi.org/10.6087/kcse.261

Purpose: Wordvice AI Proofreader is a recently developed, web-based, artificial intelligence-driven text processor that provides real-time automated proofreading and editing of user-input text. This study compared its accuracy and effectiveness with expert proofreading by human editors and with two other popular proofreading applications: the automated writing analysis tools of Google Docs and Microsoft Word. Because the tool was designed primarily for academic authors proofreading their manuscript drafts, the comparison was intended to establish the usefulness of this particular tool for these authors.
Methods: We performed a comparative analysis of proofreading completed by the Wordvice AI Proofreader, by experienced human academic editors, and by two other popular proofreading applications. The number of errors accurately reported and the overall usefulness of the vocabulary suggestions were measured using a General Language Evaluation Understanding (GLEU) metric and open dataset comparisons.
Results: In the majority of texts analyzed, the Wordvice AI Proofreader achieved performance at or near that of the human editors, identifying similar errors and offering comparable suggestions in most sample passages. It also showed higher performance and greater consistency than the other two proofreading applications evaluated.
Conclusion: The overall functionality of the Wordvice artificial intelligence proofreading tool is comparable to that of a human proofreader and equal or superior to that of two other programs with built-in automated writing evaluation proofreaders used by tens of millions of users: Google Docs and Microsoft Word.
{"title":"Reflections on 4 years in the role of a Crossref ambassador in Korea","authors":"J. Chang","doi":"10.6087/kcse.266","DOIUrl":"https://doi.org/10.6087/kcse.266","url":null,"abstract":"","PeriodicalId":43802,"journal":{"name":"Science Editing","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48842354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Development of a decision-support tool to quantify authorship contributions in clinical trial publications" — Sam T. Mathew, H. I. A. Razack, Prasanth Viswanathan. Science Editing (published February 20, 2022). DOI: https://doi.org/10.6087/kcse.259

Purpose: This study aimed to develop a decision-support tool to quantitatively determine authorship in clinical trial publications.
Methods: The tool was developed in three phases: consolidating authorship recommendations from the Good Publication Practice (GPP) and International Committee of Medical Journal Editors (ICMJE) guidelines, identifying and scoring attributes on a 5-point Likert scale or a dichotomous scale, and soliciting feedback from editors and researchers.
Results: The authorship criteria stipulated by the ICMJE and GPP recommendations were categorized into two modules. ICMJE criterion 1 and the related GPP recommendations formed Module 1 (sub-criteria: contribution to design, data generation, and interpretation), while Module 2 was based on criteria 2 to 4 and the related GPP recommendations (sub-criteria: contribution to manuscript preparation and approval). The two modules, with their sub-criteria, were then differentiated into attributes (n = 17 in Module 1; n = 12 in Module 2). An individual contributor is scored on each sub-criterion by summing the related attribute values, and the sum of the sub-criterion scores constitutes the module score (Module 1, maximum 70: conception or design of the study, 20; data acquisition, 7; data analysis, 27; interpretation of data, 16. Module 2, maximum 50: content development, 27; content review, 18; accountability, 5). The concept was implemented in Microsoft Excel with appropriate formulae and macros. A threshold of 50% for each sub-criterion and each module, with an overall score of 65%, is predefined as qualifying for authorship.
Conclusion: This authorship decision-support tool would help clinical trial sponsors assess contributions and assign authorship to deserving contributors.
"Types, limitations, and possible alternatives of peer review based on the literature and surgeons’ opinions via Twitter: a narrative review" — S. Emile, H. Hamid, S. Atıcı, Doga Nur Kosker, Mario Virgilio Papa, H. Elfeki, Chee Yang Tan, A. El‐Hussuna, S. Wexner. Science Editing (published February 20, 2022). DOI: https://doi.org/10.6087/kcse.257

This review aimed to illustrate the types, limitations, and possible alternatives of peer review (PR) based on a literature review together with the opinions of a social media audience via Twitter. The study was conducted via the #OpenSourceResearch collaborative platform and combined a comprehensive literature search on the current PR system with the opinions of a social media audience of surgeons who are actively engaged in the current PR system. Six independent researchers conducted a literature search of electronic databases in addition to Google Scholar. Electronic polls were organized via Twitter to assess surgeons’ opinions on the current PR system and potential alternative approaches. PR can be classified into single-blind, double-blind, triple-blind, and open PR. Newer PR systems include interactive platforms, prepublication and postpublication commenting or review, transparent review, and collaborative review. The main limitations of the current PR system are its allegedly time-consuming nature and its inconsistent, biased, and non-transparent results. Suggestions to improve the PR process include employing an interactive, double-blind PR system, using artificial intelligence to recruit reviewers, providing incentives for reviewers, and using PR templates. These results offer several concepts for possible alternative approaches and modifications to this critically important process.
{"title":"Role of Crossref in journal publishing over the next decade","authors":"E. Pentz","doi":"10.6087/kcse.263","DOIUrl":"https://doi.org/10.6087/kcse.263","url":null,"abstract":"","PeriodicalId":43802,"journal":{"name":"Science Editing","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48682338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Changes in article share and growth by publisher and access type in Journal Citation Reports 2016, 2018, and 2020" — Sang-Jun Kim, K. Park. Science Editing (published February 20, 2022). DOI: https://doi.org/10.6087/kcse.260

Purpose: This study explored changes in the journal publishing market by publisher and access type, using the major journals that publish about 95% of Journal Citation Reports (JCR) articles.
Methods: From JCR 2016, 2018, and 2020, a unique journal list by publisher was created in Excel and used to analyze the compound annual growth rate with pivot tables. In total, 10,953 major JCR journals were analyzed, focusing on publisher type, open access (OA) status, and mega journals (those publishing over 1,000 articles per year).
Results: Nineteen publishers published over 10,000 articles per year in JCR 2020; of these, the six largest published 59.6% of all articles and the other 13 published 22.5%, while the remaining publishers accounted for 17.9%. Large and OA publishers increased their article share through leading mega journals, but the remaining publishers showed the opposite tendency. In JCR 2020, mega journals had a 26.5% article share and an excellent distribution in terms of Journal Impact Factor quartiles. Despite the high growth (22.6%) and share (26.0%) of OA articles, the natural growth of non-OA articles (7.3%) and of total articles (10.7%) caused a rise in journal subscription fees. Articles, citations, the impact factor, and the immediacy index all increased gradually, and in JCR 2020 the compound annual growth rate of the average immediacy index was almost double that of the average impact factor.
Conclusion: The influence of OA publishers has grown under the dominance of large publishers, and mega journals may substantially change the journal market. Journal stakeholders should pay attention to these changes.
"How to share data through Harvard Dataverse, a repository site: a case of the World Journal of Men’s Health" — Hyun Jun Park. Science Editing (published February 20, 2022). DOI: https://doi.org/10.6087/kcse.270

Data are a highly valuable asset for researchers. In the past, researchers who conducted a study owned their data permanently; now, however, those data can serve as a source for further research. In 2018, the International Committee of Medical Journal Editors introduced data sharing statements for clinical trials. Although this recommendation was limited to clinical trials published in medical journals, it is a meaningful change that formalized the concept of data sharing, and the trend is expected to spread beyond medical journals to a wider range of scientific journals in the near future. Correspondingly, platforms that provide storage and services for sharing data will gradually diversify. The World Journal of Men’s Health has adopted a clinical data sharing policy. The data deposit process to Harvard Dataverse, a well-known data repository, is as follows: first, select the type of article for data sharing; second, create an account; third, write a letter to the corresponding author; fourth, receive and validate the data from the authors; fifth, upload the data to Harvard Dataverse; and sixth, add a data sharing statement to the paper. Scientific journal editors are advised to select an appropriate platform and participate in this new trend of data sharing.
{"title":"Declaration of conflict of interest for editorial board members’ articles","authors":"","doi":"10.6087/kcse.272","DOIUrl":"https://doi.org/10.6087/kcse.272","url":null,"abstract":"","PeriodicalId":43802,"journal":{"name":"Science Editing","volume":" ","pages":""},"PeriodicalIF":1.5,"publicationDate":"2022-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43027325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}