Pub Date: 2024-12-01 | Epub Date: 2023-12-26 | DOI: 10.1080/08989621.2023.2276169
New collaborative statement by bioethics journal editors on generative AI use.
Lisa M Rasmussen
{"title":"New collaborative statement by bioethics journal editors on generative AI use.","authors":"Lisa M Rasmussen","doi":"10.1080/08989621.2023.2276169","DOIUrl":"10.1080/08989621.2023.2276169","url":null,"abstract":"","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1"},"PeriodicalIF":3.4,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71428793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2022-08-18 | DOI: 10.1080/08989621.2022.2111257
Citation bias, diversity, and ethics.
Keisha S Ray, Perry Zurn, Jordan D Dworkin, Dani S Bassett, David B Resnik
How often a researcher is cited usually plays a decisive role in that person's career advancement, because academic institutions often use citation metrics, either explicitly or implicitly, to estimate research impact and productivity. Research has shown, however, that citation patterns and practices are affected by various biases, including the prestige of the authors being cited and their gender, race, and nationality, whether self-attested or perceived. Some commentators have proposed that researchers can address biases related to social identity or position by including a Citation Diversity Statement in a manuscript submitted for publication. A Citation Diversity Statement is a paragraph placed before the reference section of a manuscript in which the authors address the diversity and equitability of their references in terms of gender, race, ethnicity, or other factors and affirm a commitment to promoting equity and diversity in sources and references. The present commentary considers arguments in favor of Citation Diversity Statements, and some practical and ethical issues that these statements raise.
{"title":"Citation bias, diversity, and ethics.","authors":"Keisha S Ray, Perry Zurn, Jordan D Dworkin, Dani S Bassett, David B Resnik","doi":"10.1080/08989621.2022.2111257","DOIUrl":"10.1080/08989621.2022.2111257","url":null,"abstract":"<p><p>How often a researcher is cited usually plays a decisive role in that person's career advancement, because academic institutions often use citation metrics, either explicitly or implicitly, to estimate research impact and productivity. Research has shown, however, that citation patterns and practices are affected by various biases, including the prestige of the authors being cited and their gender, race, and nationality, whether self-attested or perceived. Some commentators have proposed that researchers can address biases related to social identity or position by including a Citation Diversity Statement in a manuscript submitted for publication. A Citation Diversity Statement is a paragraph placed before the reference section of a manuscript in which the authors address the diversity and equitability of their references in terms of gender, race, ethnicity, or other factors and affirm a commitment to promoting equity and diversity in sources and references. The present commentary considers arguments in favor of Citation Diversity Statements, and some practical and ethical issues that these statements raise.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"158-172"},"PeriodicalIF":3.4,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9938084/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10799570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2022-06-07 | DOI: 10.1080/08989621.2022.2082289
Industry effects on evidence: a case study of long-acting injectable antipsychotics.
Lisa Cosgrove, Barbara Mintzes, Harold J Bursztajn, Gianna D'Ambrozio, Allen F Shaughnessy
A vigorously debated issue in the psychiatric literature is whether long-acting injectable antipsychotics (LAIs) show clinical benefit over antipsychotics taken orally. In addressing this question, it is critical that systematic reviews incorporate risk of bias assessments of trial data in a robust way and are free of undue industry influence. In this paper, we present a case analysis in which we identify some of the design problems in a recent systematic review on LAIs vs oral formulations. This case illustrates how evidence syntheses that are shaped by commercial interests may undermine patient-centered models of recovery and care. We offer recommendations that address both the bioethical and research design issues that arise in the systematic review process when researchers have financial conflicts of interest.
{"title":"Industry effects on evidence: a case study of long-acting injectable antipsychotics.","authors":"Lisa Cosgrove, Barbara Mintzes, Harold J Bursztajn, Gianna D'Ambrozio, Allen F Shaughnessy","doi":"10.1080/08989621.2022.2082289","DOIUrl":"10.1080/08989621.2022.2082289","url":null,"abstract":"<p><p>A vigorously debated issue in the psychiatric literature is whether long-acting injectable antipsychotics (LAIs) show clinical benefit over antipsychotics taken orally. In addressing this question, it is critical that systematic reviews incorporate risk of bias assessments of trial data in a robust way and are free of undue industry influence. In this paper, we present a case analysis in which we identify some of the design problems in a recent systematic review on LAIs vs oral formulations. This case illustrates how evidence syntheses that are shaped by commercial interests may undermine patient-centered models of recovery and care. We offer recommendations that address both the bioethical and research design issues that arise in the systematic review process when researchers have financial conflicts of interest.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":"1 1","pages":"2-13"},"PeriodicalIF":3.4,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45127193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2022-06-26 | DOI: 10.1080/08989621.2022.2082290
A randomized trial alerting authors, with or without coauthors or editors, that research they cited in systematic reviews and guidelines has been retracted.
Alison Avenell, Mark J Bolland, Greg D Gamble, Andrew Grey
Retracted clinical trials may be influential in citing systematic reviews and clinical guidelines. We assessed the influence of 27 retracted trials on systematic reviews and clinical guidelines (citing publications), then alerted authors to these retractions. Citing publications were randomized to up to three e-mails to the contact author, with/without up to two coauthors and with/without the editor. After one year we assessed corrective action. We included 88 citing publications; 51% (45/88) had findings likely to change if the retracted trials were removed, 87% (39/45) of these likely substantially. Of the contacted citing publications, 51% (44/86) replied. Including three authors rather than the contact author alone was more likely to elicit a reply (P = 0.03). Including the editor did not increase replies (P = 0.66). Whether findings were judged likely to change, and the size of the likely change, had no effect on response rate or action taken. One year after the e-mails were sent, only nine publications had published notifications. E-mail alerts to authors and editors are inadequate to correct the impact of retracted publications in citing systematic reviews and guidelines. Changes to bibliographic and referencing systems, and to submission processes, are needed. Citing publications with retracted citations should be marked until the authors resolve the concerns.
{"title":"A randomized trial alerting authors, with or without coauthors or editors, that research they cited in systematic reviews and guidelines has been retracted.","authors":"Alison Avenell, Mark J Bolland, Greg D Gamble, Andrew Grey","doi":"10.1080/08989621.2022.2082290","DOIUrl":"10.1080/08989621.2022.2082290","url":null,"abstract":"<p><p>Retracted clinical trials may be influential in citing systematic reviews and clinical guidelines. We assessed the influence of 27 retracted trials on systematic reviews and clinical guidelines (citing publications), then alerted authors to these retractions. Citing publications were randomized to up to three e-mails to contact author with/without up to two coauthors, with/without the editor. After one year we assessed corrective action. We included 88 citing publications; 51% (45/88) had findings likely to change if retracted trials were removed, 87% (39/45) likely substantially. 51% (44/86) of contacted citing publications replied. Including three authors rather than the contact author alone was more likely to elicit a reply (P = 0.03). Including the editor did not increase replies (P = 0.66). Whether findings were judged likely to change, and size of the likely change, had no effect on response rate or action taken. One year after e-mails were sent only nine publications had published notifications. E-Mail alerts to authors and editors are inadequate to correct the impact of retracted publications in citing systematic reviews and guidelines. Changes to bibliographic and referencing systems, and submission processes are needed. Citing publications with retracted citations should be marked until authors resolve concerns.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"14-37"},"PeriodicalIF":3.4,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10796932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2022-09-01 | DOI: 10.1080/08989621.2022.2116318
A policy toolkit for authorship and dissemination policies may benefit NIH research consortia.
Linda Brubaker, Jesse Nodora, Tamara Bavendam, John Connett, Amy M Claussen, Cora E Lewis, Kyle Rudser, Siobhan Sutcliffe, Jean F Wyman, Janis M Miller
Authorship and dissemination policies vary across NIH research consortia. We aimed to describe elements of real-life policies in use by eligible U01 clinical research consortia. Principal investigators of eligible, active U01 clinical research projects identified in the NIH Research Portfolio Online Reporting Tools database shared relevant policies. The characteristics of key policy elements, determined a priori, were reviewed and quantified when appropriate. Twenty-one of 81 research projects met the search criteria and provided policies. Key policy elements (quoted) and their prevalence were: "manuscript proposals reviewed and approved by committee" (90%); "guidelines for acknowledgements" (86%); "writing team formation" (71%); "process for final manuscript review and approval" (71%); "responsibilities for lead author" (67%); "guidelines for other types of publications" (67%); "draft manuscript review and approval" (62%); "recommendation for number of members per consortium site" (57%); and "requirement to identify individual contributions in the manuscript" (19%). Authorship/dissemination policies for large team science research projects are highly variable. Creation of an NIH policies repository and an accompanying toolkit with model language and recommended key elements could improve comprehensiveness, ethical integrity, and efficiency in team science work while reducing the burden and cost on newly funded consortia and directing time and resources to scientific endeavors.
{"title":"A policy toolkit for authorship and dissemination policies may benefit NIH research consortia.","authors":"Linda Brubaker, Jesse Nodora, Tamara Bavendam, John Connett, Amy M Claussen, Cora E Lewis, Kyle Rudser, Siobhan Sutcliffe, Jean F Wyman, Janis M Miller","doi":"10.1080/08989621.2022.2116318","DOIUrl":"10.1080/08989621.2022.2116318","url":null,"abstract":"<p><p>Authorship and dissemination policies vary across NIH research consortia. We aimed to describe elements of real-life policies in use by eligible U01 clinical research consortia. Principal investigators of eligible, active U01 clinical research projects identified in the NIH Research Portfolio Online Reporting Tools database shared relevant policies. The characteristics of key policy elements, determined a priori, were reviewed and quantified, when appropriate. Twenty one of 81 research projects met search criteria and provided policies. K elements (e.g., in quotations): \"manuscript proposals reviewed and approved by committee\" (90%); \"guidelines for acknowledgements\" (86%); \"writing team formation\" (71%); \"process for final manuscript review and approval\" (71%), \"responsibilities for lead author\" (67%), \"guidelines for other types of publications\" (67%); \"draft manuscript review and approval\" (62%); \"recommendation for number of members per consortium site\" (57%); and \"requirement to identify individual contributions in the manuscript\" (19%). Authorship/dissemination policies for large team science research projects are highly variable. Creation of an NIH policies repository and accompanying toolkit with model language and recommended key elements could improve comprehensiveness, ethical integrity, and efficiency in team science work while reducing burden and cost on newly funded consortia and directing time and resources to scientific endeavors.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"222-240"},"PeriodicalIF":3.4,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9975116/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9502988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2022-08-18 | DOI: 10.1080/08989621.2022.2112572
Procrastination and inconsistency: Expressions of concern for publications with compromised integrity.
Andrew Grey, Alison Avenell, Mark J Bolland
Expressions of concern (EoC) can reduce the adverse effects of unreliable publications by alerting readers to concerns about publication integrity while assessment is undertaken. We investigated the use of EoC for 463 publications by two research groups for which we notified concerns about publication integrity to 142 journals and 44 publishers between March 2013 and February 2020. By December 2021, 95 papers had had an EoC, and 83 were retracted without an EoC. Median times from notification of concerns to EoC (10.4mo) or retraction without EoC (13.1mo) were similar. Among the 95 EoCs, 29 (30.5%) were followed by retraction after a median of 5.4mo, none was lifted, and 66 (69.5%) remained in place after a median of 18.1mo. Publishers with >10 notified publications issued EoCs for 0-81.8% of papers: for several publishers the proportions of notified papers for which EoCs were issued varied considerably between the 2 research groups. EoCs were issued for >30% of notified publications of randomized clinical trials and letters to the editor, and <20% of other types of research. These results demonstrate inconsistent application of EoCs between and within publishers, and prolonged times to issue and resolve EoCs.
{"title":"Procrastination and inconsistency: Expressions of concern for publications with compromised integrity.","authors":"Andrew Grey, Alison Avenell, Mark J Bolland","doi":"10.1080/08989621.2022.2112572","DOIUrl":"10.1080/08989621.2022.2112572","url":null,"abstract":"<p><p>Expressions of concern (EoC) can reduce the adverse effects of unreliable publications by alerting readers to concerns about publication integrity while assessment is undertaken. We investigated the use of EoC for 463 publications by two research groups for which we notified concerns about publication integrity to 142 journals and 44 publishers between March 2013 and February 2020. By December 2021, 95 papers had had an EoC, and 83 were retracted without an EoC. Median times from notification of concerns to EoC (10.4mo) or retraction without EoC (13.1mo) were similar. Among the 95 EoCs, 29 (30.5%) were followed by retraction after a median of 5.4mo, none was lifted, and 66 (69.5%) remained in place after a median of 18.1mo. Publishers with >10 notified publications issued EoCs for 0-81.8% of papers: for several publishers the proportions of notified papers for which EoCs were issued varied considerably between the 2 research groups. EoCs were issued for >30% of notified publications of randomized clinical trials and letters to the editor, and <20% of other types of research. These results demonstrate inconsistent application of EoCs between and within publishers, and prolonged times to issue and resolve EoCs.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"196-209"},"PeriodicalIF":3.4,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9344381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-11 | DOI: 10.1080/08989621.2024.2428205
The research literature is an unsafe workplace.
Jennifer A Byrne, Adrian G Barnett
Research is conducted in workplaces that can present safety hazards. Where researchers work in laboratories, safety hazards can arise through the need to operate complex equipment that can become unsafe if faulty or broken. The research literature also represents a workplace for millions of scientists and scholars, where publications can be considered as key research equipment. This article compares our current capacity to flag and repair faulty equipment in research laboratories versus the literature. Whereas laboratory researchers can place written notices on faulty and broken equipment to flag problems and the need for repairs, researchers have limited capacity to flag faulty research publications to other users. We argue that our current inability to flag erroneous publications quickly and at scale, combined with the lack of real-world incentives for journals and publishers to direct adequate resources toward post-publication corrections, results in the research literature representing an increasingly unsafe workplace. We describe possible solutions, such as the capacity to transfer signed PubPeer notices describing verifiable errors to relevant publications, and the reactivation of PubMed Commons.
{"title":"The research literature is an unsafe workplace.","authors":"Jennifer A Byrne, Adrian G Barnett","doi":"10.1080/08989621.2024.2428205","DOIUrl":"https://doi.org/10.1080/08989621.2024.2428205","url":null,"abstract":"<p><p>Research is conducted in workplaces that can present safety hazards. Where researchers work in laboratories, safety hazards can arise through the need to operate complex equipment that can become unsafe if faulty or broken. The research literature also represents a workplace for millions of scientists and scholars, where publications can be considered as key research equipment. This article compares our current capacity to flag and repair faulty equipment in research laboratories versus the literature. Whereas laboratory researchers can place written notices on faulty and broken equipment to flag problems and the need for repairs, researchers have limited capacity to flag faulty research publications to other users. We argue that our current inability to flag erroneous publications quickly and at scale, combined with the lack of real-world incentives for journals and publishers to direct adequate resources toward post-publication corrections, results in the research literature representing an increasingly unsafe workplace. We describe possible solutions, such as the capacity to transfer signed PubPeer notices describing verifiable errors to relevant publications, and the reactivation of PubMed Commons.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-8"},"PeriodicalIF":2.8,"publicationDate":"2024-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | DOI: 10.1080/08989621.2023.2192404
Does YouTube promote research ethics and conduct? A content analysis of Youtube Videos and analysis of sentiments through viewers comments.
Lulu Rout, Praliva Priyadarsini Khilar, Bijayalaxmi Rout
Principles of research ethics and responsible conduct are frequently violated today. The availability of information sources and the dissemination of awareness among researchers can help to reduce such violations. This study highlights how YouTube can be used to promote discussions of research misconduct and ethics. It examined how many videos on research ethics and misconduct are available, which colleges actively provide such videos, and how satisfied viewers are with the available videos, based on an analysis of comments. Several software tools, including Webometric Analyst, R-studio, and Microsoft Excel, were used for data collection and analysis. On 24 January 2023, 515 videos and 6,984 comments were retrieved using the search queries "Research ethics" OR "Research misconduct" OR "Research conduct" OR "Scientific integrity" OR "Research integrity" OR "Scientific misconduct." Results indicate that 2020 was the most active year, with the largest number of videos (241) posted that year. The channels PPIRCPSC, ABRIZAH A, and ALHOORI H uploaded 10, 9, and 8 videos respectively, placing them in the first, second, and third positions. Analysis of viewer comments showed that the majority were favorable, indicating that viewers are generally pleased with the available videos.
{"title":"Does YouTube promote research ethics and conduct? A content analysis of Youtube Videos and analysis of sentiments through viewers comments.","authors":"Lulu Rout, Praliva Priyadarsini Khilar, Bijayalaxmi Rout","doi":"10.1080/08989621.2023.2192404","DOIUrl":"10.1080/08989621.2023.2192404","url":null,"abstract":"<p><p>More commonly today, research ethics and misconduct are ideas that are frequently violated. The availability of information sources and the dissemination of awareness among researchers can help to reduce this kind of violation. This study highlights how YouTube can be used to promote discussions of research misconduct and ethics. The study looked into how many videos there are on research ethics and misconduct, which colleges actively provide such videos, and how satisfied viewers are with the available videos by analyzing comments. Various software tools, including Webometric Analyst, R-studio, and Microsoft Excel, were applied for data collection and analysis. On 01-24-2023, 515 videos and 6984 comments were retrieved using the correct search queries that is \"Research ethics\" OR \"Research misconduct\" OR \"Research conduct\" OR \"Scientific integrity\" OR \"Research integrity\" OR \"Scientific misconduct.\" Results indicate that 2020 was the most significant year, since the most videos (241) were posted in this year. The channels titled \"PPIRCPSC, ABRIZAH A, and ALHOORI H\" upload 10, 9, and 8 videos respectively, placing them in the first, second, and third positions. By analyzing viewer comments, it was determined that the majority of comments were favorable, indicating that viewers are generally pleased with the available videos.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1024-1043"},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9166219","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2023-03-03 | DOI: 10.1080/08989621.2023.2185141
Research misconduct and questionable research practices form a continuum.
Lex Bouter
Research data mismanagement (RDMM) is a serious threat to accountability, reproducibility, and re-use of data. In a recent article in this journal, it was argued that RDMM can take two forms: intentional research misconduct or unintentional questionable research practice (QRP). I disagree, because the scale of severity of consequences of research misbehavior is not bimodal. Furthermore, intentionality is difficult to prove beyond doubt and is only one of many criteria that should be taken into account when deciding on the severity of a breach of research integrity and whether a sanction is justified. Making a distinction between RDMM that is research misconduct and RDMM that is not puts too much emphasis on intentionality and sanctioning. The focus should rather be on improving data management practices through preventive actions, in which research institutions should take a leading role.
{"title":"Research misconduct and questionable research practices form a continuum.","authors":"Lex Bouter","doi":"10.1080/08989621.2023.2185141","DOIUrl":"10.1080/08989621.2023.2185141","url":null,"abstract":"<p><p>Research data mismanagement (RDMM) is a serious threat to accountability, reproducibility, and re-use of data. In a recent article in this journal, it was argued that RDMM can take two forms: intentional research misconduct or unintentional questionable research practice (QRP). I disagree because the scale for severity of consequences of research misbehavior is not bimodal. Furthermore, intentionality is difficult to prove beyond doubt and is only one of many criteria that should be taken into account when deciding on the severity of a breach of research integrity and whether a sanction is justified. Making a distinction between RDMM that is research misconduct and RDMM which not puts too much emphasis on intentionality and sanctioning. The focus should rather be on improving data management practices by preventive actions, in which research institutions should take a leading role.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1255-1259"},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9385920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-01 | Epub Date: 2023-02-21 | DOI: 10.1080/08989621.2023.2180359
Letter to editor: Academic journals should clarify the proportion of NLP-generated content in papers.
Gengyan Tang
This letter to the editor argues that if academic journals are willing to accept papers that include NLP-generated content under certain conditions, their editorial policies should clarify the permissible proportion of NLP-generated content in a paper. Excessive use of NLP-generated content should be considered academic misconduct.
{"title":"Letter to editor: Academic journals should clarify the proportion of NLP-generated content in papers.","authors":"Gengyan Tang","doi":"10.1080/08989621.2023.2180359","DOIUrl":"10.1080/08989621.2023.2180359","url":null,"abstract":"<p><p>This letter to the editor argues that if academic journals are willing to accept papers that include NLP-generated content under certain conditions, editorial policies should clarify the proportion of NLP-generated content in the paper. Excessive use of NLP-generated content should be considered as academic misconduct.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1242-1243"},"PeriodicalIF":2.8,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10757342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}