Analysis of indications for selectively missing results in comparative registry-based studies in medicine: a meta-research study.
Pub Date: 2025-03-05 | DOI: 10.1186/s41073-025-00159-x
Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes
Background: We assess whether there are indications that results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on statistical significance (p < 0.05).
Methods: Eligibility criteria: a sample of cohort-type studies that used data from a patient registry, compared two study arms to assess a medical intervention, and reported an effect for a binary outcome. Information sources: we searched PubMed in 2022/23 to identify registries in seven different medical specialties; subsequently, for each identified registry, we included all studies that satisfied the eligibility criteria and collected their p-values. Synthesis of results: we plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results due to p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.
Results: Included studies: a sample of 150 registry-based cohort-type studies. Synthesis of results: the cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05, while the distribution above the threshold rises slowly and gradually. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).
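For readers unfamiliar with the method, the following is a minimal sketch of a caliper test, assuming a hypothetical set of two-sided p-values rather than the study's data. It counts results whose absolute z-scores fall in a narrow window around the critical value z ≈ 1.96 (here taken as ±10% of that value; caliper-width conventions vary) and applies a one-sided binomial test for an excess of just-significant results. In the paper's apparent notation, k counts the just-non-significant results among the N results inside the caliper.

```python
# Minimal sketch of a caliper test for publication bias, assuming a
# hypothetical list of two-sided p-values (not the study's data).
# Under no selection, z-scores just above and just below the critical
# value should be roughly equally frequent (binomial with p = 0.5).
from scipy.stats import binomtest, norm

p_values = [0.001, 0.012, 0.031, 0.042, 0.044, 0.047, 0.048, 0.049,
            0.051, 0.055, 0.120, 0.260, 0.480]  # hypothetical sample

z_threshold = norm.ppf(1 - 0.05 / 2)  # ~1.96 for a two-sided test at 0.05
caliper = 0.10 * z_threshold          # one reading of a "10% caliper"

z_scores = [abs(norm.ppf(p / 2)) for p in p_values]  # |z| from two-sided p
in_window = [z for z in z_scores if abs(z - z_threshold) <= caliper]
above = sum(z > z_threshold for z in in_window)      # "just significant"

# One-sided test: are significant results over-represented in the caliper?
result = binomtest(above, n=len(in_window), p=0.5, alternative="greater")
print(f"k = {len(in_window) - above}, N = {len(in_window)}, "
      f"p = {result.pvalue:.3f}")
```

This reading is consistent with the reported numbers: with k = 2 non-significant results out of N = 13 inside the caliper, a one-sided binomial test with p = 0.5 gives approximately 0.011.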
Conclusions: We found indications that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results, to support the development and implementation of more specific methods for preventing them.
{"title":"Analysis of indications for selectively missing results in comparative registry-based studies in medicine: a meta-research study.","authors":"Paula Starke, Zhentian Zhang, Hannah Papmeier, Dawid Pieper, Tim Mathes","doi":"10.1186/s41073-025-00159-x","DOIUrl":"10.1186/s41073-025-00159-x","url":null,"abstract":"<p><strong>Background: </strong>We assess if there are indications that results of registry-based studies comparing the effectiveness of interventions might be selectively missing depending on the statistical significance (p < 0.05).</p><p><strong>Methods: </strong>Eligibility criteria Sample of cohort type studies that used data from a patient registry, compared two study arms for assessing a medical intervention, and reported an effect for a binary outcome. Information sources We searched PubMed to identify registries in seven different medical specialties in 2022/23. Subsequently, we included all studies that satisfied the eligibility criteria for each of the identified registries and collected p-values from these studies. Synthesis of results We plotted the cumulative distribution of p-values and a histogram of absolute z-scores for visual inspection of selectively missing results because of p-hacking, selective reporting, or publication bias. In addition, we tested for publication bias by applying a caliper test.</p><p><strong>Results: </strong>Included studies Sample of 150 registry-based cohort type studies. Synthesis of results The cumulative distribution of p-values displays an abrupt, heavy increase just below the significance threshold of 0.05 while the distribution above the threshold shows a slow, gradual increase. The p-value of the caliper test with a 10% caliper was 0.011 (k = 2, N = 13).</p><p><strong>Conclusions: </strong>We found that the results of registry-based studies might be selectively missing. Results from registry-based studies comparing medical interventions should be interpreted very cautiously, as positive findings could be a result from p-hacking, publication bias, or selective reporting. Prospective registration of such studies is necessary and should be made mandatory both in regulatory contexts and for publication in journals. Further research is needed to determine the main reasons for selectively missing results to support the development and implementation of more specific methods for preventing selectively missing results.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"2"},"PeriodicalIF":7.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11881244/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143560279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit.
Pub Date: 2025-02-28 | DOI: 10.1186/s41073-025-00158-y
Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng
Background: Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.
Methods: This study performed a cross-sectional audit of the publicly available policies of 162 academic publishers, indexed as members of the International Association of Scientific, Technical and Medical Publishers (STM). Data extraction from the publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with the content analysis reviewed by a third contributor (September 2023 to December 2023). Data were categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.
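As an illustration of this tabulation step, here is a minimal sketch assuming a hypothetical coding table; the publishers, policy elements, and codings are invented for the example and are not the study's data.

```python
# Minimal sketch of the audit tabulation: each row is one (hypothetical)
# publisher, each column one policy element coded "yes" (permitted),
# "no", or "NAI" (no available information).
import pandas as pd

policies = pd.DataFrame({
    "proofreading":     ["yes", "yes", "NAI", "no"],
    "image_generation": ["no",  "NAI", "NAI", "no"],
})  # hypothetical codings for four publishers

# Counts and percentages of each code, per policy element.
for element in policies.columns:
    counts = policies[element].value_counts()
    pcts = (counts / len(policies) * 100).round(1)
    print(element, dict(counts), dict(pcts))
```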
Results: A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or any other AI tool). Most (49/56, or 87.5%) required specific disclosure of AI chatbot use. Four publishers' policies placed a complete ban on the use of AI chatbots by authors.
Conclusions: Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use, with more academic publishers having a policy.
{"title":"Policies on artificial intelligence chatbots among academic publishers: a cross-sectional audit.","authors":"Daivat Bhavsar, Laura Duffy, Hamin Jo, Cynthia Lokker, R Brian Haynes, Alfonso Iorio, Ana Marusic, Jeremy Y Ng","doi":"10.1186/s41073-025-00158-y","DOIUrl":"10.1186/s41073-025-00158-y","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) chatbots are novel computer programs that can generate text or content in a natural language format. Academic publishers are adapting to the transformative role of AI chatbots in producing or facilitating scientific research. This study aimed to examine the policies established by scientific, technical, and medical academic publishers for defining and regulating the authors' responsible use of AI chatbots.</p><p><strong>Methods: </strong>This study performed a cross-sectional audit on the publicly available policies of 162 academic publishers, indexed as members of the International Association of the Scientific, Technical, and Medical Publishers (STM). Data extraction of publicly available policies on the webpages of all STM academic publishers was performed independently, in duplicate, with content analysis reviewed by a third contributor (September 2023-December 2023). Data was categorized into policy elements, such as 'proofreading' and 'image generation'. Counts and percentages of 'yes' (i.e., permitted), 'no', and 'no available information' (NAI) were established for each policy element.</p><p><strong>Results: </strong>A total of 56/162 (34.6%) STM academic publishers had a publicly available policy guiding the authors' use of AI chatbots. No policy allowed authorship for AI chatbots (or other AI tool). Most (49/56 or 87.5%) required specific disclosure of AI chatbot use. Four policies/publishers placed a complete ban on the use of AI chatbots by authors.</p><p><strong>Conclusions: </strong>Only a third of STM academic publishers had publicly available policies as of December 2023. A re-examination of all STM members in 12-18 months may uncover evolving approaches toward AI chatbot use with more academic publishers having a policy.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"10 1","pages":"1"},"PeriodicalIF":7.2,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11869395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143532223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publisher Correction: Developing the Clarity and Openness in Reporting: E3-based (CORE) Reference user manual for creation of clinical study reports in the era of clinical trial transparency.
Pub Date: 2024-12-23 | DOI: 10.1186/s41073-024-00157-5
Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel
{"title":"Publisher Correction: Developing the Clarity and Openness in Reporting: E3-based (CORE) Reference user manual for creation of clinical study reports in the era of clinical trial transparency.","authors":"Samina Hamilton, Aaron B Bernstein, Graham Blakey, Vivien Fagan, Tracy Farrow, Debbie Jordan, Walther Seiler, Anna Shannon, Art Gertel","doi":"10.1186/s41073-024-00157-5","DOIUrl":"10.1186/s41073-024-00157-5","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"16"},"PeriodicalIF":7.2,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668038/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publisher Correction: Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency.
Pub Date: 2024-12-20 | DOI: 10.1186/s41073-024-00154-8
Adam G Dunn, Enrico Coiera, Kenneth D Mandl, Florence T Bourgeois
{"title":"Publisher Correction: Conflict of interest disclosure in biomedical research: a review of current practices, biases, and the role of public registries in improving transparency.","authors":"Adam G Dunn, Enrico Coiera, Kenneth D Mandl, Florence T Bourgeois","doi":"10.1186/s41073-024-00154-8","DOIUrl":"10.1186/s41073-024-00154-8","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"13"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660574/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publisher Correction: Propagation of errors in citation networks: a study involving the entire citation network of a widely cited paper published in, and later retracted from, the journal Nature.
Pub Date: 2024-12-20 | DOI: 10.1186/s41073-024-00156-6
Paul E van der Vet, Harm Nijveen
{"title":"Publisher Correction: Propagation of errors in citation networks: a study involving the entire citation network of a widely cited paper published in, and later retracted from, the journal Nature.","authors":"Paul E van der Vet, Harm Nijveen","doi":"10.1186/s41073-024-00156-6","DOIUrl":"10.1186/s41073-024-00156-6","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"14"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660461/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Publisher Correction: Sex and Gender Equity in Research: rationale for the SAGER guidelines and recommended use.
Pub Date: 2024-12-20 | DOI: 10.1186/s41073-024-00155-7
Shirin Heidari, Thomas F Babor, Paola De Castro, Sera Tort, Mirjam Curno
{"title":"Publisher Correction: Sex and Gender Equity in Research: rationale for the SAGER guidelines and recommended use.","authors":"Shirin Heidari, Thomas F Babor, Paola De Castro, Sera Tort, Mirjam Curno","doi":"10.1186/s41073-024-00155-7","DOIUrl":"10.1186/s41073-024-00155-7","url":null,"abstract":"","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"15"},"PeriodicalIF":7.2,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660825/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142873627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the links between questionable research practices, scientific norms and organisational culture.
Pub Date: 2024-10-14 | DOI: 10.1186/s41073-024-00151-x
Robin Brooker, Nick Allum
Background: This study investigates the determinants of engagement in questionable research practices (QRPs), focusing on both individual-level factors (such as scholarly field, commitment to scientific norms, gender, contract type, and career stage) and institution-level factors (including industry type, researchers' perceptions of their research culture, and awareness of institutional policies on research integrity).
Methods: Using a multi-level modelling approach, we analyse data from an international survey of researchers working across disciplinary fields to estimate the effect of these factors on QRP engagement.
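The following is a minimal sketch of such a multi-level (mixed-effects) model, assuming simulated data, a continuous QRP-engagement score, and illustrative predictor names; it is not the authors' actual specification. The random intercept for institution is what allows the variance in engagement to be partitioned between individuals and institutions.

```python
# Minimal sketch of a multi-level model of QRP engagement, with researchers
# nested in institutions. All data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "qrp_score":   rng.normal(2.0, 0.5, n),    # engagement score (illustrative)
    "norms":       rng.normal(0.0, 1.0, n),    # commitment to scientific norms
    "fixed_term":  rng.integers(0, 2, n),      # contract type indicator
    "institution": rng.integers(0, 30, n).astype(str),
})

# Random intercept for institution; the institution-level variance component
# indicates how much QRP engagement varies between institutions.
model = smf.mixedlm("qrp_score ~ norms + fixed_term", df,
                    groups=df["institution"])
fit = model.fit()
print(fit.summary())
```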
Results: Our findings indicate that contract type, career stage, academic field, adherence to scientific norms, and gender significantly predict QRP engagement. At the institution level, working outside a collegial culture, experiencing harmful publication pressure, and the presence of safeguards against integrity breaches show small associations. Only a minimal amount of the variance in QRP engagement is attributable to differences between institutions and countries.
Conclusions: We discuss the implications of these findings for developing effective interventions to reduce QRPs, highlighting the importance of addressing both individual and institutional factors in efforts to foster research integrity.
{"title":"Investigating the links between questionable research practices, scientific norms and organisational culture.","authors":"Robin Brooker, Nick Allum","doi":"10.1186/s41073-024-00151-x","DOIUrl":"https://doi.org/10.1186/s41073-024-00151-x","url":null,"abstract":"<p><strong>Background: </strong>This study investigates the determinants of engagement in questionable research practices (QRPs), focusing on both individual-level factors (such as scholarly field, commitment to scientific norms, gender, contract type, and career stage) and institution-level factors (including industry type, researchers' perceptions of their research culture, and awareness of institutional policies on research integrity).</p><p><strong>Methods: </strong>Using a multi-level modelling approach, we analyse data from an international survey of researchers working across disciplinary fields to estimate the effect of these factors on QRP engagement.</p><p><strong>Results: </strong>Our findings indicate that contract type, career stage, academic field, adherence to scientific norms and gender significantly predict QRP engagement. At the institution level, factors such as being outside of a collegial culture and experiencing harmful publication pressure, and the presence of safeguards against integrity breaches have small associations. Only a minimal amount of variance in QRP engagement is attributable to differences between institutions and countries.</p><p><strong>Conclusions: </strong>We discuss the implications of these findings for developing effective interventions to reduce QRPs, highlighting the importance of addressing both individual and institutional factors in efforts to foster research integrity.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"12"},"PeriodicalIF":7.2,"publicationDate":"2024-10-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11472529/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142482695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An evaluation of the preprints produced at the beginning of the 2022 mpox public health emergency.
Pub Date: 2024-10-07 | DOI: 10.1186/s41073-024-00152-w
Melanie Sterian, Anmol Samra, Kusala Pussegoda, Tricia Corrin, Mavra Qamar, Austyn Baumeister, Izza Israr, Lisa Waddell
Background: Preprints are scientific articles that have not undergone the peer-review process. They allow the latest evidence to be shared rapidly; however, it is unclear whether they can be confidently used for decision-making during a public health emergency. This study aimed to compare the data and quality of preprints released during the first four months of the 2022 mpox outbreak with their published versions.
Methods: Eligible preprints (n = 76) posted between May and August 2022 were identified through an established mpox literature database and followed to July 2024 for changes in publication status. The quality of preprints and published studies was assessed by two independent reviewers, using validated tools available for the study design (n = 33), to evaluate changes in quality. Tools included the Newcastle-Ottawa Scale, the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2), and the JBI Critical Appraisal Checklists. The questions in each tool led to an overall quality assessment of high quality (no concerns with study design, conduct, and/or analysis), moderate quality (minor concerns), or low quality (several concerns). Changes in data (e.g. methods, outcomes, results) for preprint-published pairs (n = 60) were assessed by one reviewer and verified by a second.
Results: Preprints and published versions that could be evaluated for quality (n = 25 pairs) were mostly assessed as low quality. Minimal to no change in quality from preprint to published version was identified: all observational studies (10/10), most case series (6/7), and all surveillance data analyses (3/3) had no change in overall quality, while some diagnostic test accuracy studies (3/5) improved or worsened their quality assessment scores. Among all pairs (n = 60), outcomes were often added in the published version (58%) and less commonly removed (18%). Numerical results changed from preprint to published version in 53% of studies; however, in most of these studies (22/32) the changes were minor and did not affect the main conclusions.
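A minimal sketch of how such pairwise changes can be tabulated, assuming hypothetical records for three preprint-published pairs (field names are invented for the example):

```python
# Minimal sketch of tabulating changes across preprint-published pairs.
# Each record notes whether outcomes were added or removed and whether
# the overall quality rating changed between versions (hypothetical data).
pairs = [
    {"outcomes_added": True,  "outcomes_removed": False, "quality_changed": False},
    {"outcomes_added": False, "outcomes_removed": True,  "quality_changed": False},
    {"outcomes_added": True,  "outcomes_removed": False, "quality_changed": True},
]

n = len(pairs)
for field in ("outcomes_added", "outcomes_removed", "quality_changed"):
    count = sum(p[field] for p in pairs)
    print(f"{field}: {count}/{n} ({100 * count / n:.0f}%)")
```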
Conclusions: This study suggests that the minimal changes in quality, results, and main conclusions from preprint to published versions support the use of preprints in decision-making during a public health emergency, with the same critical evaluation tools applied to preprints as to published studies.
{"title":"An evaluation of the preprints produced at the beginning of the 2022 mpox public health emergency.","authors":"Melanie Sterian, Anmol Samra, Kusala Pussegoda, Tricia Corrin, Mavra Qamar, Austyn Baumeister, Izza Israr, Lisa Waddell","doi":"10.1186/s41073-024-00152-w","DOIUrl":"10.1186/s41073-024-00152-w","url":null,"abstract":"<p><strong>Background: </strong>Preprints are scientific articles that have not undergone the peer-review process. They allow the latest evidence to be rapidly shared, however it is unclear whether they can be confidently used for decision-making during a public health emergency. This study aimed to compare the data and quality of preprints released during the first four months of the 2022 mpox outbreak to their published versions.</p><p><strong>Methods: </strong>Eligible preprints (n = 76) posted between May to August 2022 were identified through an established mpox literature database and followed to July 2024 for changes in publication status. Quality of preprints and published studies was assessed by two independent reviewers to evaluate changes in quality, using validated tools that were available for the study design (n = 33). Tools included the Newcastle-Ottawa Scale; Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2); and JBI Critical Appraisal Checklists. The questions in each tool led to an overall quality assessment of high quality (no concerns with study design, conduct, and/or analysis), moderate quality (minor concerns) or low quality (several concerns). Changes in data (e.g. methods, outcomes, results) for preprint-published pairs (n = 60) were assessed by one reviewer and verified by a second.</p><p><strong>Results: </strong>Preprints and published versions that could be evaluated for quality (n = 25 pairs) were mostly assessed as low quality. Minimal to no change in quality from preprint to published was identified: all observational studies (10/10), most case series (6/7) and all surveillance data analyses (3/3) had no change in overall quality, while some diagnostic test accuracy studies (3/5) improved or worsened their quality assessment scores. Among all pairs (n = 60), outcomes were often added in the published version (58%) and less commonly removed (18%). Numerical results changed from preprint to published in 53% of studies, however most of these studies (22/32) had changes that were minor and did not impact main conclusions of the study.</p><p><strong>Conclusions: </strong>This study suggests the minimal changes in quality, results and main conclusions from preprint to published versions supports the use of preprints, and the use of the same critical evaluation tools on preprints as applied to published studies, in decision-making during a public health emergency.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"11"},"PeriodicalIF":7.2,"publicationDate":"2024-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457328/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differences in the reporting of conflicts of interest and sponsorships in systematic reviews with meta-analyses in dentistry: an examination of factors associated with their reporting.
Pub Date: 2024-09-30 | DOI: 10.1186/s41073-024-00150-y
Jonas Heymann, Naichuan Su, Clovis Mariano Faggion
Background: Reporting conflicts of interest (COI) and sources of sponsorship is of paramount importance for adequately interpreting the results of systematic reviews. Some evidence suggests that COI and sponsorship influence study results. The objectives of this meta-research study were twofold: (a) to assess the reporting of COI and sponsorship statements in systematic reviews published in dentistry in three sources (the abstract, the journal's website, and the article's full text) and (b) to assess the associations between the characteristics of the systematic reviews and the reporting of COI.
Methods: We searched the PubMed database for dental systematic reviews published from database inception to June 2023. We assessed how COI and sponsorship statements were reported in the three sources. We performed a logistic regression analysis to assess the associations between the characteristics of the systematic reviews and the reporting of COI.
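A minimal sketch of this kind of logistic regression, assuming simulated data and illustrative variable names (not the authors' actual coding); exponentiating the fitted coefficients gives the odds ratios discussed in the results.

```python
# Minimal sketch of the association analysis: one (hypothetical) row per
# systematic review, a binary indicator of COI statement reporting, and
# review characteristics as predictors. All names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "coi_reported": rng.integers(0, 2, n),  # COI statement present (0/1)
    "open_access":  rng.integers(0, 2, n),  # journal access model (0/1)
    "registered":   rng.integers(0, 2, n),  # review registration (0/1)
    "n_authors":    rng.integers(2, 12, n),
})

fit = smf.logit("coi_reported ~ open_access + registered + n_authors", df).fit()
print(np.exp(fit.params))  # odds ratios for each review characteristic
```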
Results: We assessed 924 abstracts published in PubMed and on the corresponding journals' websites, as well as the full texts associated with these 924 abstracts. A total of 639 (69%) and 795 (88%) studies had no COI statement in the abstract on PubMed and on the journal's website, respectively. In contrast, a COI statement was reported in 801 (87%) full texts. Sponsorship statements were not reported in 911 (99%) and 847 (93%) abstracts published in PubMed and on a journal's website, respectively. Nearly two-thirds of the full-text articles (N = 607) included sponsorship statements. Journal access was significantly associated with COI statement reporting in all three sources: compared with subscription or hybrid journals, open-access journals had significantly higher odds of reporting COI in PubMed and in the full texts, but significantly lower odds of reporting COI on the websites. Abstract type was significantly associated with COI statement reporting on the journal's website and in the full text. Review registration (based on the full text) and the number of authors were significantly associated with COI statement reporting in PubMed and in the full texts. Several other variables were significantly associated with COI statement reporting in one of the three sources.
Conclusions: COI and sponsorship statements seem to be underreported in abstracts and on the journals' homepages compared with the full texts. These results were particularly pronounced for abstracts published both in the PubMed database and on the journals' websites. Several characteristics of systematic reviews were associated with COI statement reporting.
{"title":"Differences in the reporting of conflicts of interest and sponsorships in systematic reviews with meta-analyses in dentistry: an examination of factors associated with their reporting.","authors":"Jonas Heymann, Naichuan Su, Clovis Mariano Faggion","doi":"10.1186/s41073-024-00150-y","DOIUrl":"10.1186/s41073-024-00150-y","url":null,"abstract":"<p><strong>Background: </strong>Reporting conflicts of interest (COI) and sources of sponsorship are of paramount importance in adequately interpreting the results of systematic reviews. Some evidence suggests that there is an influence of COI and sponsorship on the study results. The objectives of this meta-research study were twofold: (a) to assess the reporting of COI and sponsorship statements in systematic reviews published in dentistry in three sources (abstract, journal's website and article's full text) and (b) to assess the associations between the characteristics of the systematic reviews and reporting of COI.</p><p><strong>Methods: </strong>We searched the PubMed database for dental systematic reviews published from database inception to June 2023. We assessed how COI and sponsorship statements were reported in the three sources. We performed a logistic regression analysis to assess the associations between the characteristics of the systematic reviews and the reporting of COI.</p><p><strong>Results: </strong>We assessed 924 abstracts published in PubMed and on the corresponding journals´ websites. Similarly, full texts associated with the 924 abstracts were also assessed. A total of 639 (69%) and 795 (88%) studies had no statement of COI in the abstracts on PubMed and the journal's website, respectively. In contrast, a COI statement was reported in 801 (87%) full texts. Sponsorship statements were not reported in 911 (99%) and 847 (93%) abstracts published in PubMed and a journal´s website, respectively. Nearly two-thirds of the full-text articles (N = 607) included sponsorship statements. Journal access was significantly associated with COI statement reporting in all three sources. Open-access journals have significantly higher odds to report COI in PubMed and full-texts, while have significantly lower odds to report COI in the websites, compared with subscription or hybrid journals. Abstract type was significantly associated with COI statement reporting on the journal's website and in the full text. Review registration based on the full text and the number of authors were significantly associated with COI statement reporting in PubMed and in the full texts. Several other variables were found to be significantly associated with COI statement reporting in one of the three sources.</p><p><strong>Conclusions: </strong>COI and sponsorship statements seem to be underreported in the abstracts and homepage of the journals, compared to the full-texts. These results were particularly more pronounced in abstracts published in both the PubMed database and the journals' websites. 
Several characteristics of systematic reviews were associated with COI statement reporting.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"10"},"PeriodicalIF":7.2,"publicationDate":"2024-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11443767/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142334024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge and practices of plagiarism among journal editors of Nepal.
Pub Date: 2024-08-23 | DOI: 10.1186/s41073-024-00149-5
Krishna Subedi, Nuwadatta Subedi, Rebicca Ranjit
Background: This study was conducted to assess the knowledge and ongoing practices of plagiarism among the journal editors of Nepal.
Methods: This web-based, analytical, cross-sectional questionnaire study was conducted among journal editors working across various journals in Nepal. All journal editors from NepJOL-indexed journals in Nepal who provided e-consent were included in the study using a convenience sampling technique. A final set of questionnaires was prepared using Google Forms, including six knowledge questions, three practice questions (with subsets) for authors, and four (with subsets) for editors. These were distributed to journal editors in Nepal via email, Facebook Messenger, Viber, and WhatsApp. Reminders were sent weekly, up to three times. Data analysis was done in R. Frequencies and percentages were calculated for the demographic variables, correct responses regarding knowledge, and practices related to plagiarism. The independent t-test and one-way ANOVA were used to compare mean knowledge across demographic variables. For all tests, statistical significance was set at p < 0.05.
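The study ran its analysis in R; the following Python/SciPy sketch shows equivalent tests on hypothetical knowledge scores, purely to illustrate the comparisons described.

```python
# Minimal sketch of the statistical comparisons, on hypothetical scores.
from scipy.stats import ttest_ind, f_oneway

male   = [5.1, 5.4, 4.9, 5.6, 5.2]   # hypothetical knowledge scores
female = [5.3, 5.0, 5.5, 5.1, 5.4]

# Independent t-test for a two-group demographic variable (e.g. gender).
t, p = ttest_ind(male, female)
print(f"t = {t:.2f}, p = {p:.3f}")

# One-way ANOVA for a variable with three or more groups (e.g. age band).
g1, g2, g3 = [5.2, 5.5, 5.1], [5.0, 5.3, 5.6], [5.4, 5.2, 4.9]
F, p = f_oneway(g1, g2, g3)
print(f"F = {F:.2f}, p = {p:.3f}")
```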
Results: A total of 147 participants completed the survey. The mean age of the participants was 43.61 ± 8.91 years. Nearly all participants were aware of plagiarism, and most had heard of both Turnitin and iThenticate. Slightly more than three-fourths correctly identified that citation and referencing can avoid plagiarism. The overall mean knowledge score was 5.32 ± 0.99, with no significant differences across demographic variables. As authors, 4% admitted to copying sections of others' work without acknowledgment and to reusing their own published work without proper citation. Just over one-fifth did not use plagiarism detection software when writing research articles. Fewer than half reported that their journals used authentic plagiarism detection software. Four-fifths had suspected plagiarism in manuscripts assigned through their journal. Three out of every five participants reported plagiarism found in a manuscript to the respective authors. Nearly all participants believed every journal must have plagiarism-detection software.
Conclusions: Although journal editors' knowledge and practices regarding plagiarism appear to be high, they are still not satisfactory. It is strongly recommended that journals use authentic plagiarism detection software and that editors be adequately trained and keep their knowledge of it up to date.
{"title":"Knowledge and practices of plagiarism among journal editors of Nepal.","authors":"Krishna Subedi, Nuwadatta Subedi, Rebicca Ranjit","doi":"10.1186/s41073-024-00149-5","DOIUrl":"10.1186/s41073-024-00149-5","url":null,"abstract":"<p><strong>Background: </strong>This study was conducted to assess the knowledge and ongoing practices of plagiarism among the journal editors of Nepal.</p><p><strong>Methods: </strong>This web-based questionnaire analytical cross-sectional was conducted among journal editors working across various journals in Nepal. All journal editors from NepJOL-indexed journals in Nepal who provided e-consent were included in the study using a convenience sampling technique. A final set of questionnaires was prepared using Google Forms, including six knowledge questions, three practice questions (with subsets) for authors, and four (with subsets) for editors. These were distributed to journal editors in Nepal via email, Facebook Messenger, Viber, and WhatsApp. Reminders were sent weekly, up to three times. Data analysis was done in R software. Frequencies and percentages were calculated for the demographic variables, correct responses regarding knowledge, and practices related to plagiarism. Independent t-test and one-way ANOVA were used to compare mean knowledge with demographic variables. For all tests, statistical significance was set at p < 0.05.</p><p><strong>Results: </strong>A total of 147 participants completed the survey.The mean age of the participants was found to be 43.61 ± 8.91 years. Nearly all participants were aware of plagiarism, and most had heard of both Turnitin and iThenticate. Slightly more than three-fourths correctly identified that citation and referencing can avoid plagiarism. The overall mean knowledge score was 5.32 ± 0.99, with no significant differences across demographic variables. As authors, 4% admitted to copying sections of others' work without acknowledgment and reusing their own published work without proper citations. Just over one-fifth did not use plagiarism detection software when writing research articles. Fewer than half reported that their journals used authentic plagiarism detection software. Four-fifths of them suspected plagiarism in the manuscripts assigned through their journal. Three out of every five participants reported the plagiarism used in the manuscript to the respective authors. Nearly all participants believe every journal must have plagiarism-detection software.</p><p><strong>Conclusions: </strong>Although journal editors' knowledge and practices regarding plagiarism appear to be high, they are still not satisfactory. It is strongly recommended to use authentic plagiarism detection software by the journals and editors should be adequately trained and update their knowledge about it.</p>","PeriodicalId":74682,"journal":{"name":"Research integrity and peer review","volume":"9 1","pages":"9"},"PeriodicalIF":7.2,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11342615/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142037940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}